I1022 18:51:38.212184 6 e2e.go:243] Starting e2e run "2ca3cac9-56dc-4215-8ed6-81202124ad5e" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1603392697 - Will randomize all specs
Will run 215 of 4413 specs
Oct 22 18:51:38.408: INFO: >>> kubeConfig: /root/.kube/config
Oct 22 18:51:38.411: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 22 18:51:38.438: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 22 18:51:38.473: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 22 18:51:38.473: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Oct 22 18:51:38.473: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 22 18:51:38.481: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Oct 22 18:51:38.481: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 22 18:51:38.481: INFO: e2e test version: v1.15.12
Oct 22 18:51:38.482: INFO: kube-apiserver version: v1.15.11
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
validates that NodeSelector is respected if not matching [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 18:51:38.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
Oct 22 18:51:38.564: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Oct 22 18:51:38.566: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Oct 22 18:51:38.573: INFO: Waiting for terminating namespaces to be deleted...
Oct 22 18:51:38.575: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Oct 22 18:51:38.580: INFO: kindnet-7bsvw from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Oct 22 18:51:38.580: INFO: Container kindnet-cni ready: true, restart count 0
Oct 22 18:51:38.580: INFO: kube-proxy-mtljr from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Oct 22 18:51:38.580: INFO: Container kube-proxy ready: true, restart count 0
Oct 22 18:51:38.580: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Oct 22 18:51:38.584: INFO: kindnet-djqgh from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Oct 22 18:51:38.584: INFO: Container kindnet-cni ready: true, restart count 0
Oct 22 18:51:38.584: INFO: kube-proxy-52wt5 from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Oct 22 18:51:38.584: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.1640652a0b492d93], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 18:51:39.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1953" for this suite.
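The spec above schedules a pod whose nodeSelector matches no node and then asserts the FailedScheduling event. A minimal manual reproduction of the same scenario (the pod name mirrors the event above, but the label key, image, and namespace are illustrative, not taken from this run):

# Create a pod whose nodeSelector cannot be satisfied by any node.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
  nodeSelector:
    example.com/no-such-label: "42"
EOF

# The scheduler should record a FailedScheduling warning for the pod.
kubectl get events --field-selector involvedObject.name=restricted-pod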
Oct 22 18:51:45.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Oct 22 18:51:45.838: INFO: namespace sched-pred-1953 deletion completed in 6.109304126s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.355 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Oct 22 18:51:45.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-65ea282d-03ef-4481-ad4f-4c08b1398f28 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-65ea282d-03ef-4481-ad4f-4c08b1398f28 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Oct 22 18:51:51.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1990" for this suite. 
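The ConfigMap spec above mounts a ConfigMap as a volume and then updates it, waiting for the kubelet to project the new value into the running pod. A hand-run equivalent looks roughly like this (names are illustrative; how quickly the file changes depends on the kubelet sync period and cache TTL):

kubectl create configmap demo-cm --from-literal=data-1=value-1

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-watcher
spec:
  containers:
  - name: reader
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-cm
EOF

# Change the value, then wait for the projected file to catch up.
kubectl create configmap demo-cm --from-literal=data-1=value-2 --dry-run -o yaml | kubectl replace -f -
kubectl exec cm-watcher -- cat /etc/cm/data-1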
Oct 22 18:52:13.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Oct 22 18:52:14.057: INFO: namespace configmap-1990 deletion completed in 22.090863638s • [SLOW TEST:28.219 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Oct 22 18:52:14.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-a504a3f1-af68-4c9f-ac94-716a1b5cd99b STEP: Creating a pod to test consume configMaps Oct 22 18:52:14.374: INFO: Waiting up to 5m0s for pod "pod-configmaps-756568df-a464-4f8a-a31e-122fd918a36b" in namespace "configmap-5089" to be "success or failure" Oct 22 18:52:14.402: INFO: Pod "pod-configmaps-756568df-a464-4f8a-a31e-122fd918a36b": Phase="Pending", Reason="", readiness=false. Elapsed: 27.671397ms Oct 22 18:52:16.407: INFO: Pod "pod-configmaps-756568df-a464-4f8a-a31e-122fd918a36b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032925277s Oct 22 18:52:18.558: INFO: Pod "pod-configmaps-756568df-a464-4f8a-a31e-122fd918a36b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183979404s Oct 22 18:52:20.581: INFO: Pod "pod-configmaps-756568df-a464-4f8a-a31e-122fd918a36b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.207141702s STEP: Saw pod success Oct 22 18:52:20.581: INFO: Pod "pod-configmaps-756568df-a464-4f8a-a31e-122fd918a36b" satisfied condition "success or failure" Oct 22 18:52:20.584: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-756568df-a464-4f8a-a31e-122fd918a36b container configmap-volume-test: STEP: delete the pod Oct 22 18:52:20.697: INFO: Waiting for pod pod-configmaps-756568df-a464-4f8a-a31e-122fd918a36b to disappear Oct 22 18:52:20.708: INFO: Pod pod-configmaps-756568df-a464-4f8a-a31e-122fd918a36b no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Oct 22 18:52:20.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5089" for this suite. 
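The non-root variant above additionally runs the consuming container under a non-zero UID; the only moving part beyond the previous sketch is the pod-level securityContext (the UID, image, and the demo-cm ConfigMap are illustrative assumptions):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-nonroot
spec:
  securityContext:
    runAsUser: 1000          # read the projected keys as a non-root user
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/cm/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-cm
EOF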
Oct 22 18:52:26.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Oct 22 18:52:26.843: INFO: namespace configmap-5089 deletion completed in 6.106455755s • [SLOW TEST:12.786 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Oct 22 18:52:26.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components Oct 22 18:52:26.942: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Oct 22 18:52:26.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3924' Oct 22 18:52:30.739: INFO: stderr: "" Oct 22 18:52:30.739: INFO: stdout: "service/redis-slave created\n" Oct 22 18:52:30.740: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Oct 22 18:52:30.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3924' Oct 22 18:52:30.999: INFO: stderr: "" Oct 22 18:52:30.999: INFO: stdout: "service/redis-master created\n" Oct 22 18:52:30.999: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Oct 22 18:52:30.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3924' Oct 22 18:52:31.306: INFO: stderr: "" Oct 22 18:52:31.306: INFO: stdout: "service/frontend created\n" Oct 22 18:52:31.307: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Oct 22 18:52:31.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3924' Oct 22 18:52:31.592: INFO: stderr: "" Oct 22 18:52:31.592: INFO: stdout: "deployment.apps/frontend created\n" Oct 22 18:52:31.592: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-master spec: replicas: 1 selector: matchLabels: app: redis role: master tier: backend template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Oct 22 18:52:31.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3924' Oct 22 18:52:31.912: INFO: stderr: "" Oct 22 18:52:31.912: INFO: stdout: "deployment.apps/redis-master created\n" Oct 22 18:52:31.913: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 selector: matchLabels: app: redis role: slave tier: backend template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Oct 22 18:52:31.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3924' Oct 22 18:52:32.159: INFO: stderr: "" Oct 22 18:52:32.159: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app Oct 22 18:52:32.159: INFO: Waiting for all frontend pods to be Running. Oct 22 18:52:42.209: INFO: Waiting for frontend to serve content. Oct 22 18:52:42.227: INFO: Trying to add a new entry to the guestbook. Oct 22 18:52:42.239: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Oct 22 18:52:42.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3924' Oct 22 18:52:42.398: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Oct 22 18:52:42.398: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Oct 22 18:52:42.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3924' Oct 22 18:52:42.553: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 22 18:52:42.553: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Oct 22 18:52:42.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3924' Oct 22 18:52:42.685: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 22 18:52:42.685: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Oct 22 18:52:42.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3924' Oct 22 18:52:42.789: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 22 18:52:42.789: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Oct 22 18:52:42.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3924' Oct 22 18:52:42.892: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 22 18:52:42.892: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Oct 22 18:52:42.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3924' Oct 22 18:52:43.123: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 22 18:52:43.123: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Oct 22 18:52:43.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3924" for this suite. 
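The guestbook spec above drives everything through `kubectl ... create -f -` with manifests piped on stdin, then tears the objects down with a forced, zero-grace-period delete (hence the warning repeated above). The same pattern by hand, trimmed to a single service for brevity (the namespace name is illustrative):

kubectl create namespace guestbook-demo

# Create an object from a manifest on stdin, exactly as the test harness does.
cat <<'EOF' | kubectl create -f - --namespace=guestbook-demo
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
EOF

# Forced cleanup: the API server stops waiting for kubelet confirmation.
kubectl delete service redis-master --grace-period=0 --force --namespace=guestbook-demo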
Oct 22 18:53:27.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 18:53:27.547: INFO: namespace kubectl-3924 deletion completed in 44.394846951s
• [SLOW TEST:60.704 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Guestbook application
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create and stop a working application [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 18:53:27.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Oct 22 18:53:27.614: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d553b7e-fe13-428a-a69c-5f0d914b247a" in namespace "downward-api-270" to be "success or failure"
Oct 22 18:53:27.642: INFO: Pod "downwardapi-volume-9d553b7e-fe13-428a-a69c-5f0d914b247a": Phase="Pending", Reason="", readiness=false. Elapsed: 27.335381ms
Oct 22 18:53:29.645: INFO: Pod "downwardapi-volume-9d553b7e-fe13-428a-a69c-5f0d914b247a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031097618s
Oct 22 18:53:31.649: INFO: Pod "downwardapi-volume-9d553b7e-fe13-428a-a69c-5f0d914b247a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034924044s
STEP: Saw pod success
Oct 22 18:53:31.649: INFO: Pod "downwardapi-volume-9d553b7e-fe13-428a-a69c-5f0d914b247a" satisfied condition "success or failure"
Oct 22 18:53:31.652: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-9d553b7e-fe13-428a-a69c-5f0d914b247a container client-container:
STEP: delete the pod
Oct 22 18:53:31.754: INFO: Waiting for pod downwardapi-volume-9d553b7e-fe13-428a-a69c-5f0d914b247a to disappear
Oct 22 18:53:31.758: INFO: Pod downwardapi-volume-9d553b7e-fe13-428a-a69c-5f0d914b247a no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 18:53:31.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-270" for this suite.
Oct 22 18:53:37.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 18:53:37.896: INFO: namespace downward-api-270 deletion completed in 6.116797233s
• [SLOW TEST:10.348 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
should invoke init containers on a RestartAlways pod [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 18:53:37.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Oct 22 18:53:37.944: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 18:53:46.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7021" for this suite.
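The Downward API spec above requests limits.memory through a resourceFieldRef without setting a memory limit on the container, so the kubelet reports the node's allocatable memory instead. A minimal pod that exposes the same value (pod name, image, and paths are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory      # no limit set above, so node allocatable is reported
EOF

kubectl logs downward-demo -c client-container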
Oct 22 18:54:10.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 18:54:10.470: INFO: namespace init-container-7021 deletion completed in 24.1323594s
• [SLOW TEST:32.573 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should invoke init containers on a RestartAlways pod [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
should support subpaths with configmap pod [LinuxOnly] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 18:54:10.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-9qwg
STEP: Creating a pod to test atomic-volume-subpath
Oct 22 18:54:10.711: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9qwg" in namespace "subpath-9120" to be "success or failure"
Oct 22 18:54:10.715: INFO: Pod "pod-subpath-test-configmap-9qwg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166325ms
Oct 22 18:54:12.726: INFO: Pod "pod-subpath-test-configmap-9qwg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014500624s
Oct 22 18:54:14.730: INFO: Pod "pod-subpath-test-configmap-9qwg": Phase="Running", Reason="", readiness=true. Elapsed: 4.018777384s
Oct 22 18:54:16.734: INFO: Pod "pod-subpath-test-configmap-9qwg": Phase="Running", Reason="", readiness=true. Elapsed: 6.022681632s
Oct 22 18:54:18.768: INFO: Pod "pod-subpath-test-configmap-9qwg": Phase="Running", Reason="", readiness=true. Elapsed: 8.056480015s
Oct 22 18:54:20.771: INFO: Pod "pod-subpath-test-configmap-9qwg": Phase="Running", Reason="", readiness=true. Elapsed: 10.060465861s
Oct 22 18:54:22.776: INFO: Pod "pod-subpath-test-configmap-9qwg": Phase="Running", Reason="", readiness=true. Elapsed: 12.064873222s
Oct 22 18:54:24.780: INFO: Pod "pod-subpath-test-configmap-9qwg": Phase="Running", Reason="", readiness=true. Elapsed: 14.069032732s
Oct 22 18:54:26.784: INFO: Pod "pod-subpath-test-configmap-9qwg": Phase="Running", Reason="", readiness=true. Elapsed: 16.072880073s
Oct 22 18:54:28.787: INFO: Pod "pod-subpath-test-configmap-9qwg": Phase="Running", Reason="", readiness=true. Elapsed: 18.07628761s
Oct 22 18:54:30.792: INFO: Pod "pod-subpath-test-configmap-9qwg": Phase="Running", Reason="", readiness=true. Elapsed: 20.080683377s
Oct 22 18:54:32.815: INFO: Pod "pod-subpath-test-configmap-9qwg": Phase="Running", Reason="", readiness=true. Elapsed: 22.104207994s
Oct 22 18:54:34.834: INFO: Pod "pod-subpath-test-configmap-9qwg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.122698751s
STEP: Saw pod success
Oct 22 18:54:34.834: INFO: Pod "pod-subpath-test-configmap-9qwg" satisfied condition "success or failure"
Oct 22 18:54:34.837: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-9qwg container test-container-subpath-configmap-9qwg:
STEP: delete the pod
Oct 22 18:54:34.859: INFO: Waiting for pod pod-subpath-test-configmap-9qwg to disappear
Oct 22 18:54:34.868: INFO: Pod pod-subpath-test-configmap-9qwg no longer exists
STEP: Deleting pod pod-subpath-test-configmap-9qwg
Oct 22 18:54:34.868: INFO: Deleting pod "pod-subpath-test-configmap-9qwg" in namespace "subpath-9120"
[AfterEach] [sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 18:54:34.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9120" for this suite.
Oct 22 18:54:40.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 18:54:40.959: INFO: namespace subpath-9120 deletion completed in 6.086391787s
• [SLOW TEST:30.489 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod [LinuxOnly] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose
should create services for rc [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 18:54:40.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Oct 22 18:54:41.063: INFO: namespace kubectl-7730
Oct 22 18:54:41.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7730'
Oct 22 18:54:41.343: INFO: stderr: ""
Oct 22 18:54:41.343: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
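The Subpath spec that finished above projects a single ConfigMap key at a subPath inside an existing directory instead of shadowing the whole mount point. A pod fragment illustrating the same mount shape (ConfigMap name, key, and paths are illustrative, not the test's own fixture):

kubectl create configmap subpath-cm --from-literal=config.txt=hello

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/app/config.txt"]
    volumeMounts:
    - name: cm
      mountPath: /etc/app/config.txt
      subPath: config.txt        # only this key appears under /etc/app
  volumes:
  - name: cm
    configMap:
      name: subpath-cm
EOF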
Oct 22 18:54:42.347: INFO: Selector matched 1 pods for map[app:redis] Oct 22 18:54:42.347: INFO: Found 0 / 1 Oct 22 18:54:43.347: INFO: Selector matched 1 pods for map[app:redis] Oct 22 18:54:43.347: INFO: Found 0 / 1 Oct 22 18:54:44.347: INFO: Selector matched 1 pods for map[app:redis] Oct 22 18:54:44.347: INFO: Found 0 / 1 Oct 22 18:54:45.348: INFO: Selector matched 1 pods for map[app:redis] Oct 22 18:54:45.348: INFO: Found 1 / 1 Oct 22 18:54:45.348: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Oct 22 18:54:45.351: INFO: Selector matched 1 pods for map[app:redis] Oct 22 18:54:45.351: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Oct 22 18:54:45.351: INFO: wait on redis-master startup in kubectl-7730 Oct 22 18:54:45.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7lq6t redis-master --namespace=kubectl-7730' Oct 22 18:54:45.455: INFO: stderr: "" Oct 22 18:54:45.455: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 Oct 18:54:44.180 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Oct 18:54:44.180 # Server started, Redis version 3.2.12\n1:M 22 Oct 18:54:44.180 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Oct 18:54:44.180 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Oct 22 18:54:45.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7730' Oct 22 18:54:45.607: INFO: stderr: "" Oct 22 18:54:45.607: INFO: stdout: "service/rm2 exposed\n" Oct 22 18:54:45.642: INFO: Service rm2 in namespace kubectl-7730 found. STEP: exposing service Oct 22 18:54:47.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7730' Oct 22 18:54:47.853: INFO: stderr: "" Oct 22 18:54:47.853: INFO: stdout: "service/rm3 exposed\n" Oct 22 18:54:47.881: INFO: Service rm3 in namespace kubectl-7730 found. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Oct 22 18:54:49.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7730" for this suite. 
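The expose spec above turns a replication controller, and then an existing service, into new services purely from the command line. The commands it runs can be reproduced verbatim, assuming a replication controller named redis-master exists in the current namespace:

# Service selecting the RC's pods, listening on 1234 and targeting 6379.
kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379

# A second service derived from the first service's selector.
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379

kubectl get services rm2 rm3 -o wide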
Oct 22 18:55:11.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 18:55:11.986: INFO: namespace kubectl-7730 deletion completed in 22.094454722s
• [SLOW TEST:31.026 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl expose
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create services for rc [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected configMap
should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 18:55:11.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-4443b2b7-1c33-49ce-9489-dd867dc88ab9
STEP: Creating a pod to test consume configMaps
Oct 22 18:55:12.130: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-de202bb1-ad51-4ab1-bab5-3e6b9d6a562d" in namespace "projected-3117" to be "success or failure"
Oct 22 18:55:12.145: INFO: Pod "pod-projected-configmaps-de202bb1-ad51-4ab1-bab5-3e6b9d6a562d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.351508ms
Oct 22 18:55:14.158: INFO: Pod "pod-projected-configmaps-de202bb1-ad51-4ab1-bab5-3e6b9d6a562d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02774089s
Oct 22 18:55:16.161: INFO: Pod "pod-projected-configmaps-de202bb1-ad51-4ab1-bab5-3e6b9d6a562d": Phase="Running", Reason="", readiness=true. Elapsed: 4.031276529s
Oct 22 18:55:18.165: INFO: Pod "pod-projected-configmaps-de202bb1-ad51-4ab1-bab5-3e6b9d6a562d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03518396s
STEP: Saw pod success
Oct 22 18:55:18.165: INFO: Pod "pod-projected-configmaps-de202bb1-ad51-4ab1-bab5-3e6b9d6a562d" satisfied condition "success or failure"
Oct 22 18:55:18.167: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-de202bb1-ad51-4ab1-bab5-3e6b9d6a562d container projected-configmap-volume-test:
STEP: delete the pod
Oct 22 18:55:18.202: INFO: Waiting for pod pod-projected-configmaps-de202bb1-ad51-4ab1-bab5-3e6b9d6a562d to disappear
Oct 22 18:55:18.253: INFO: Pod pod-projected-configmaps-de202bb1-ad51-4ab1-bab5-3e6b9d6a562d no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 18:55:18.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3117" for this suite.
Oct 22 18:55:24.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 18:55:24.344: INFO: namespace projected-3117 deletion completed in 6.087470862s
• [SLOW TEST:12.358 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-apps] ReplicationController
should adopt matching pods on creation [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 18:55:24.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 18:55:29.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3088" for this suite.
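The ReplicationController spec above first creates a bare pod carrying a 'name' label and then a controller whose selector matches it; rather than creating a new replica, the controller adopts the orphan. A hand-run sketch of the same flow (names, label, and image are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF

# The orphan pod should now list the RC in its ownerReferences.
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[*].name}'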
Oct 22 18:55:49.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Oct 22 18:55:49.578: INFO: namespace replication-controller-3088 deletion completed in 20.103526666s • [SLOW TEST:25.234 seconds] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Oct 22 18:55:49.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Oct 22 18:55:53.653: INFO: Pod pod-hostip-a6617c2f-1637-445a-a92e-0ab187c1c9d4 has hostIP: 172.18.0.6 [AfterEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Oct 22 18:55:53.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6883" for this suite. 
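The Pods spec above only checks that status.hostIP is populated once the pod is bound to a node; the same field can be read back directly (the pod name here is illustrative, the run above used a generated one):

# hostIP is filled in by the kubelet once the pod is running on a node.
kubectl get pod pod-hostip-demo -o jsonpath='{.status.hostIP}{"\n"}'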
Oct 22 18:56:15.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Oct 22 18:56:15.742: INFO: namespace pods-6883 deletion completed in 22.085452254s • [SLOW TEST:26.164 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Oct 22 18:56:15.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6756.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6756.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6756.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6756.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6756.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6756.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6756.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6756.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6756.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6756.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6756.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 108.10.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.10.108_udp@PTR;check="$$(dig +tcp +noall +answer +search 108.10.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.10.108_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6756.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6756.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6756.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6756.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6756.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6756.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6756.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6756.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6756.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6756.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6756.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 108.10.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.10.108_udp@PTR;check="$$(dig +tcp +noall +answer +search 108.10.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.10.108_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 22 18:56:21.962: INFO: Unable to read wheezy_udp@dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:21.966: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:21.969: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:21.972: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:21.992: INFO: Unable to read jessie_udp@dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:21.994: INFO: Unable to read jessie_tcp@dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:21.996: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:21.999: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:22.014: INFO: Lookups using dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a failed for: [wheezy_udp@dns-test-service.dns-6756.svc.cluster.local wheezy_tcp@dns-test-service.dns-6756.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local jessie_udp@dns-test-service.dns-6756.svc.cluster.local jessie_tcp@dns-test-service.dns-6756.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local] Oct 22 18:56:27.018: INFO: Unable to read wheezy_udp@dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:27.021: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods 
dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:27.024: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:27.027: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:27.050: INFO: Unable to read jessie_udp@dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:27.052: INFO: Unable to read jessie_tcp@dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:27.055: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:27.058: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:27.074: INFO: Lookups using dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a failed for: [wheezy_udp@dns-test-service.dns-6756.svc.cluster.local wheezy_tcp@dns-test-service.dns-6756.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local jessie_udp@dns-test-service.dns-6756.svc.cluster.local jessie_tcp@dns-test-service.dns-6756.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local] Oct 22 18:56:32.019: INFO: Unable to read wheezy_udp@dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:32.022: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:32.025: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:32.028: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:32.047: INFO: Unable to read jessie_udp@dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the 
server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:32.050: INFO: Unable to read jessie_tcp@dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:32.053: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:32.055: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:32.073: INFO: Lookups using dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a failed for: [wheezy_udp@dns-test-service.dns-6756.svc.cluster.local wheezy_tcp@dns-test-service.dns-6756.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local jessie_udp@dns-test-service.dns-6756.svc.cluster.local jessie_tcp@dns-test-service.dns-6756.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local] Oct 22 18:56:37.044: INFO: Unable to read wheezy_udp@dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:37.048: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:37.050: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:37.053: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:37.072: INFO: Unable to read jessie_udp@dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:37.074: INFO: Unable to read jessie_tcp@dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:37.077: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:37.079: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local from pod 
dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:37.095: INFO: Lookups using dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a failed for: [wheezy_udp@dns-test-service.dns-6756.svc.cluster.local wheezy_tcp@dns-test-service.dns-6756.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local jessie_udp@dns-test-service.dns-6756.svc.cluster.local jessie_tcp@dns-test-service.dns-6756.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local] Oct 22 18:56:42.019: INFO: Unable to read wheezy_udp@dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:42.023: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:42.026: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:42.029: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:42.049: INFO: Unable to read jessie_udp@dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:42.052: INFO: Unable to read jessie_tcp@dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:42.055: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:42.057: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:42.074: INFO: Lookups using dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a failed for: [wheezy_udp@dns-test-service.dns-6756.svc.cluster.local wheezy_tcp@dns-test-service.dns-6756.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local jessie_udp@dns-test-service.dns-6756.svc.cluster.local jessie_tcp@dns-test-service.dns-6756.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local] Oct 22 
18:56:47.018: INFO: Unable to read wheezy_udp@dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:47.021: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:47.024: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:47.027: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:47.051: INFO: Unable to read jessie_udp@dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:47.054: INFO: Unable to read jessie_tcp@dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:47.057: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:47.060: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local from pod dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a: the server could not find the requested resource (get pods dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a) Oct 22 18:56:47.076: INFO: Lookups using dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a failed for: [wheezy_udp@dns-test-service.dns-6756.svc.cluster.local wheezy_tcp@dns-test-service.dns-6756.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local jessie_udp@dns-test-service.dns-6756.svc.cluster.local jessie_tcp@dns-test-service.dns-6756.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6756.svc.cluster.local] Oct 22 18:56:52.077: INFO: DNS probes using dns-6756/dns-test-8b45358f-d986-4c0e-8080-04d78fa5301a succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Oct 22 18:56:53.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6756" for this suite. 
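
Editor's note: the DNS probe above retries every few seconds until both the wheezy and jessie query sets resolve, then reports success at 18:56:52. A minimal sketch of the same kind of lookups using only Go's standard resolver is shown below; it assumes it runs inside a pod whose resolv.conf points at the cluster DNS, and reuses the service/namespace names from the log.

```go
// Minimal sketch of the lookups the DNS probe performs (standard library only).
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	svc := "dns-test-service.dns-6756.svc.cluster.local"

	// A/AAAA record for the ClusterIP service.
	addrs, err := net.DefaultResolver.LookupHost(ctx, svc)
	fmt.Println("host:", addrs, err)

	// SRV record for the named port, i.e. _http._tcp.<service>.<ns>.svc.cluster.local.
	_, srvs, err := net.DefaultResolver.LookupSRV(ctx, "http", "tcp", svc)
	if err != nil {
		fmt.Println("SRV lookup error:", err)
	}
	for _, s := range srvs {
		fmt.Printf("SRV: %s:%d\n", s.Target, s.Port)
	}
}
```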
Oct 22 18:56:59.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Oct 22 18:56:59.226: INFO: namespace dns-6756 deletion completed in 6.11032797s • [SLOW TEST:43.483 seconds] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Oct 22 18:56:59.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Oct 22 18:56:59.327: INFO: Pod name rollover-pod: Found 0 pods out of 1 Oct 22 18:57:04.331: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 22 18:57:04.331: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Oct 22 18:57:06.335: INFO: Creating deployment "test-rollover-deployment" Oct 22 18:57:06.345: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Oct 22 18:57:08.351: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Oct 22 18:57:08.356: INFO: Ensure that both replica sets have 1 created replica Oct 22 18:57:08.361: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Oct 22 18:57:08.367: INFO: Updating deployment test-rollover-deployment Oct 22 18:57:08.367: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Oct 22 18:57:10.375: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Oct 22 18:57:10.381: INFO: Make sure deployment "test-rollover-deployment" is complete Oct 22 18:57:10.387: INFO: all replica sets need to contain the pod-template-hash label Oct 22 18:57:10.387: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989826, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989826, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989828, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989826, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 22 18:57:12.395: INFO: all replica sets need to contain the pod-template-hash label Oct 22 18:57:12.395: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989826, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989826, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989828, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989826, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 22 18:57:14.394: INFO: all replica sets need to contain the pod-template-hash label Oct 22 18:57:14.394: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989826, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989826, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989832, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989826, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 22 18:57:16.395: INFO: all replica sets need to contain the pod-template-hash label Oct 22 18:57:16.395: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989826, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989826, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989832, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989826, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 22 18:57:18.395: INFO: all replica sets need to contain the 
pod-template-hash label Oct 22 18:57:18.395: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989826, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989826, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989832, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989826, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 22 18:57:20.394: INFO: all replica sets need to contain the pod-template-hash label Oct 22 18:57:20.394: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989826, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989826, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989832, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989826, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 22 18:57:22.395: INFO: all replica sets need to contain the pod-template-hash label Oct 22 18:57:22.396: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989826, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989826, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989832, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738989826, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 22 18:57:24.394: INFO: Oct 22 18:57:24.394: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Oct 22 18:57:24.400: INFO: Deployment "test-rollover-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-4991,SelfLink:/apis/apps/v1/namespaces/deployment-4991/deployments/test-rollover-deployment,UID:b275c382-fdec-45c0-9ce3-abcb19b2ed36,ResourceVersion:5306685,Generation:2,CreationTimestamp:2020-10-22 18:57:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-10-22 18:57:06 +0000 UTC 2020-10-22 18:57:06 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-10-22 18:57:22 +0000 UTC 2020-10-22 18:57:06 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Oct 22 18:57:24.402: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-4991,SelfLink:/apis/apps/v1/namespaces/deployment-4991/replicasets/test-rollover-deployment-854595fc44,UID:e02385c7-17e4-4457-a2ad-d2e2ef11d8d9,ResourceVersion:5306674,Generation:2,CreationTimestamp:2020-10-22 18:57:08 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b275c382-fdec-45c0-9ce3-abcb19b2ed36 0xc002abfeb7 0xc002abfeb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Oct 22 18:57:24.402: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Oct 22 18:57:24.402: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-4991,SelfLink:/apis/apps/v1/namespaces/deployment-4991/replicasets/test-rollover-controller,UID:d54eeff5-a7f3-4919-baf9-3c241525cbb5,ResourceVersion:5306683,Generation:2,CreationTimestamp:2020-10-22 18:56:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b275c382-fdec-45c0-9ce3-abcb19b2ed36 0xc002abfde7 0xc002abfde8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Oct 22 18:57:24.402: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-4991,SelfLink:/apis/apps/v1/namespaces/deployment-4991/replicasets/test-rollover-deployment-9b8b997cf,UID:9a5561ce-9c4c-4c27-93d9-5ae4c4eb6abe,ResourceVersion:5306634,Generation:2,CreationTimestamp:2020-10-22 18:57:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b275c382-fdec-45c0-9ce3-abcb19b2ed36 0xc002abff80 0xc002abff81}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Oct 22 18:57:24.405: INFO: Pod "test-rollover-deployment-854595fc44-bsf67" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-bsf67,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-4991,SelfLink:/api/v1/namespaces/deployment-4991/pods/test-rollover-deployment-854595fc44-bsf67,UID:8ad4a18e-191b-482b-96fb-00033a77b133,ResourceVersion:5306652,Generation:0,CreationTimestamp:2020-10-22 18:57:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 e02385c7-17e4-4457-a2ad-d2e2ef11d8d9 0xc0020dac87 0xc0020dac88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-42zwk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-42zwk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-42zwk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020dad00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020dad20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 18:57:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 18:57:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 
+0000 UTC 2020-10-22 18:57:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 18:57:08 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.136,StartTime:2020-10-22 18:57:08 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-10-22 18:57:12 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://56dea3737b216afc1a36fc6e03b48c822c8c1849b01f2ffb978b0b59e71d1cd8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Oct 22 18:57:24.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4991" for this suite. Oct 22 18:57:30.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Oct 22 18:57:30.504: INFO: namespace deployment-4991 deletion completed in 6.096875619s • [SLOW TEST:31.278 seconds] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Oct 22 18:57:30.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Oct 22 18:57:30.629: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Oct 22 18:57:45.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5935" for this suite. 
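
Editor's note: the Pods test above submits a pod, sets up a watch, deletes the pod gracefully, and verifies the deletion is observed. The sketch below shows that watch-then-delete pattern, assuming a recent client-go (API calls take a context); the pod and namespace names are illustrative, not the test's generated names.

```go
// Sketch: watch a pod, delete it gracefully, and wait for the DELETED event.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	ns, name := "pods-5935", "pod-submit-remove" // illustrative names

	// Open the watch before deleting so no event is missed.
	w, err := client.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{
		FieldSelector: "metadata.name=" + name,
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Graceful delete (30s grace period), then wait for the DELETED event.
	grace := int64(30)
	if err := client.CoreV1().Pods(ns).Delete(ctx, name, metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
		panic(err)
	}
	for {
		select {
		case ev := <-w.ResultChan():
			if ev.Type == watch.Deleted {
				fmt.Println("pod deletion observed")
				return
			}
		case <-time.After(2 * time.Minute):
			panic("timed out waiting for deletion event")
		}
	}
}
```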
Oct 22 18:57:51.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Oct 22 18:57:51.503: INFO: namespace pods-5935 deletion completed in 6.095313509s • [SLOW TEST:20.999 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Oct 22 18:57:51.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Oct 22 18:57:51.579: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e621918d-2758-426a-aa18-6d13b09561d7" in namespace "downward-api-2316" to be "success or failure" Oct 22 18:57:51.595: INFO: Pod "downwardapi-volume-e621918d-2758-426a-aa18-6d13b09561d7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.006084ms Oct 22 18:57:53.599: INFO: Pod "downwardapi-volume-e621918d-2758-426a-aa18-6d13b09561d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020224751s Oct 22 18:57:55.603: INFO: Pod "downwardapi-volume-e621918d-2758-426a-aa18-6d13b09561d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024161392s STEP: Saw pod success Oct 22 18:57:55.603: INFO: Pod "downwardapi-volume-e621918d-2758-426a-aa18-6d13b09561d7" satisfied condition "success or failure" Oct 22 18:57:55.606: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-e621918d-2758-426a-aa18-6d13b09561d7 container client-container: STEP: delete the pod Oct 22 18:57:55.620: INFO: Waiting for pod downwardapi-volume-e621918d-2758-426a-aa18-6d13b09561d7 to disappear Oct 22 18:57:55.637: INFO: Pod downwardapi-volume-e621918d-2758-426a-aa18-6d13b09561d7 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Oct 22 18:57:55.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2316" for this suite. 
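
Editor's note: the Downward API test above mounts a volume that projects the container's CPU limit and, because the container sets no limit, expects the projected value to fall back to node allocatable CPU. A sketch of that volume definition using the k8s.io/api types follows; the container name is illustrative.

```go
// Sketch: a downward API volume projecting limits.cpu into a file.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container", // illustrative container name
						Resource:      "limits.cpu",       // with no limit set, kubelet projects node allocatable CPU
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```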
Oct 22 18:58:01.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Oct 22 18:58:01.757: INFO: namespace downward-api-2316 deletion completed in 6.116754366s • [SLOW TEST:10.253 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Oct 22 18:58:01.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Oct 22 18:58:07.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6265" for this suite. 
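
Editor's note: the Watchers test above starts several watches from successive resource versions and checks that all of them deliver events in the same order. The sketch below opens one such watch from a listed resourceVersion, assuming a recent client-go; the namespace is taken from the log.

```go
// Sketch: list to obtain a resourceVersion, then watch from that point.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	list, err := client.CoreV1().ConfigMaps("watch-6265").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Every watcher started at the same resourceVersion sees the same event order.
	w, err := client.CoreV1().ConfigMaps("watch-6265").Watch(ctx, metav1.ListOptions{
		ResourceVersion: list.ResourceVersion,
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println(ev.Type) // ADDED / MODIFIED / DELETED, in arrival order
	}
}
```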
Oct 22 18:58:13.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Oct 22 18:58:13.626: INFO: namespace watch-6265 deletion completed in 6.161384347s • [SLOW TEST:11.868 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Oct 22 18:58:13.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Oct 22 18:58:18.237: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-627 pod-service-account-f8c88746-7e0b-4d55-a3d5-fc9d1de12a59 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Oct 22 18:58:18.481: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-627 pod-service-account-f8c88746-7e0b-4d55-a3d5-fc9d1de12a59 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Oct 22 18:58:18.700: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-627 pod-service-account-f8c88746-7e0b-4d55-a3d5-fc9d1de12a59 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Oct 22 18:58:18.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-627" for this suite. 
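
Editor's note: the ServiceAccounts test above execs `cat` against the three files that the kubelet mounts from the service account token. The sketch below reads the same files from inside a pod using only the standard library; it assumes the default automounted service account.

```go
// Sketch: read the automounted service account credentials inside a pod.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, f := range []string{"token", "ca.crt", "namespace"} {
		data, err := os.ReadFile(filepath.Join(dir, f))
		if err != nil {
			fmt.Println(f, "error:", err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", f, len(data))
	}
}
```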
Oct 22 18:58:24.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Oct 22 18:58:25.020: INFO: namespace svcaccounts-627 deletion completed in 6.099778807s • [SLOW TEST:11.393 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Oct 22 18:58:25.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W1022 18:58:36.499269 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Oct 22 18:58:36.499: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Oct 22 18:58:36.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3485" for this suite. 
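
Editor's note: the garbage collector test above deletes a replication controller without orphaning, so the GC removes the pods it owns via their ownerReferences. The sketch below shows the kind of delete call that triggers this, assuming a recent client-go; the controller name is hypothetical, and DeletePropagationOrphan would instead leave the pods behind.

```go
// Sketch: delete an RC with background propagation so owned pods are garbage collected.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	policy := metav1.DeletePropagationBackground
	err = client.CoreV1().ReplicationControllers("gc-3485").Delete(
		context.Background(), "example-rc", // hypothetical RC name
		metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
}
```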
Oct 22 18:58:42.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Oct 22 18:58:42.611: INFO: namespace gc-3485 deletion completed in 6.110039919s • [SLOW TEST:17.591 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Oct 22 18:58:42.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Oct 22 18:58:48.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3714" for this suite. 
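
Editor's note: the Kubelet hostAliases test above verifies that entries declared in the pod spec are appended to the pod's /etc/hosts. A sketch of such a spec using the k8s.io/api types follows; the IP and hostnames are illustrative.

```go
// Sketch: a pod spec with hostAliases that the kubelet writes into /etc/hosts.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		HostAliases: []corev1.HostAlias{{
			IP:        "123.45.67.89",                       // illustrative IP
			Hostnames: []string{"foo.remote", "bar.remote"}, // illustrative hostnames
		}},
		Containers: []corev1.Container{{
			Name:    "busybox-host-aliases",
			Image:   "busybox",
			Command: []string{"sh", "-c", "cat /etc/hosts && sleep 3600"},
		}},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
```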
Oct 22 18:59:38.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Oct 22 18:59:38.918: INFO: namespace kubelet-test-3714 deletion completed in 50.117591585s • [SLOW TEST:56.307 seconds] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Oct 22 18:59:38.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-447 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-447 STEP: Deleting pre-stop pod Oct 22 18:59:52.057: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Oct 22 18:59:52.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-447" for this suite. 
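
Editor's note: in the PreStop test above, deleting the tester pod causes its preStop hook to fire and register with the server pod (hence "Received": {"prestop": 1}). The sketch below shows a container with an HTTP preStop hook; it assumes a recent k8s.io/api, where the hook type is named LifecycleHandler (older releases call it Handler), and the path, host, and port are hypothetical.

```go
// Sketch: a container whose preStop hook issues an HTTP GET before SIGTERM is sent.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	c := corev1.Container{
		Name:  "tester",
		Image: "busybox",
		Lifecycle: &corev1.Lifecycle{
			PreStop: &corev1.LifecycleHandler{
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/prestop",           // hypothetical path on the server pod
					Host: "10.244.1.200",       // hypothetical server pod IP
					Port: intstr.FromInt(8080), // hypothetical port
				},
			},
		},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
```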
Oct 22 19:00:30.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Oct 22 19:00:30.185: INFO: namespace prestop-447 deletion completed in 38.097527516s • [SLOW TEST:51.266 seconds] [k8s.io] [sig-node] PreStop /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Oct 22 19:00:30.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Oct 22 19:00:30.301: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-8406,SelfLink:/api/v1/namespaces/watch-8406/configmaps/e2e-watch-test-resource-version,UID:aa87ee22-2a55-429a-838f-a4c9f2b12ea5,ResourceVersion:5307596,Generation:0,CreationTimestamp:2020-10-22 19:00:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Oct 22 19:00:30.301: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-8406,SelfLink:/api/v1/namespaces/watch-8406/configmaps/e2e-watch-test-resource-version,UID:aa87ee22-2a55-429a-838f-a4c9f2b12ea5,ResourceVersion:5307597,Generation:0,CreationTimestamp:2020-10-22 19:00:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Oct 22 19:00:30.301: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8406" for this suite. Oct 22 19:00:36.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Oct 22 19:00:36.387: INFO: namespace watch-8406 deletion completed in 6.082068265s • [SLOW TEST:6.201 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Oct 22 19:00:36.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-38d2737a-51cb-40e5-b329-089d102f5434 STEP: Creating a pod to test consume secrets Oct 22 19:00:36.478: INFO: Waiting up to 5m0s for pod "pod-secrets-4802b427-6e70-44e1-8721-d5948a5db2aa" in namespace "secrets-9335" to be "success or failure" Oct 22 19:00:36.507: INFO: Pod "pod-secrets-4802b427-6e70-44e1-8721-d5948a5db2aa": Phase="Pending", Reason="", readiness=false. Elapsed: 28.275479ms Oct 22 19:00:38.511: INFO: Pod "pod-secrets-4802b427-6e70-44e1-8721-d5948a5db2aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032247373s Oct 22 19:00:40.753: INFO: Pod "pod-secrets-4802b427-6e70-44e1-8721-d5948a5db2aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.27432396s STEP: Saw pod success Oct 22 19:00:40.753: INFO: Pod "pod-secrets-4802b427-6e70-44e1-8721-d5948a5db2aa" satisfied condition "success or failure" Oct 22 19:00:40.755: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-4802b427-6e70-44e1-8721-d5948a5db2aa container secret-volume-test: STEP: delete the pod Oct 22 19:00:40.875: INFO: Waiting for pod pod-secrets-4802b427-6e70-44e1-8721-d5948a5db2aa to disappear Oct 22 19:00:40.895: INFO: Pod pod-secrets-4802b427-6e70-44e1-8721-d5948a5db2aa no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Oct 22 19:00:40.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9335" for this suite. 
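
Editor's note: the Secrets test above consumes a single secret through two separate volumes in one pod. The sketch below builds that shape with the k8s.io/api types; the secret name is copied from the log, while the volume names and mount paths are illustrative.

```go
// Sketch: one secret mounted via two volumes at two paths in the same pod.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	secretName := "secret-test-38d2737a-51cb-40e5-b329-089d102f5434" // from the log above
	mkVol := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: secretName},
			},
		}
	}
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes:       []corev1.Volume{mkVol("secret-volume-1"), mkVol("secret-volume-2")},
		Containers: []corev1.Container{{
			Name:  "secret-volume-test",
			Image: "busybox",
			VolumeMounts: []corev1.VolumeMount{
				{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
				{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
			},
		}},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
```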
Oct 22 19:00:46.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:00:47.035: INFO: namespace secrets-9335 deletion completed in 6.137077822s

• [SLOW TEST:10.649 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:00:47.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-2287
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2287 to expose endpoints map[]
Oct 22 19:00:47.459: INFO: successfully validated that service endpoint-test2 in namespace services-2287 exposes endpoints map[] (140.346157ms elapsed)
STEP: Creating pod pod1 in namespace services-2287
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2287 to expose endpoints map[pod1:[80]]
Oct 22 19:00:51.887: INFO: successfully validated that service endpoint-test2 in namespace services-2287 exposes endpoints map[pod1:[80]] (4.325538377s elapsed)
STEP: Creating pod pod2 in namespace services-2287
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2287 to expose endpoints map[pod1:[80] pod2:[80]]
Oct 22 19:00:55.118: INFO: successfully validated that service endpoint-test2 in namespace services-2287 exposes endpoints map[pod1:[80] pod2:[80]] (3.228006982s elapsed)
STEP: Deleting pod pod1 in namespace services-2287
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2287 to expose endpoints map[pod2:[80]]
Oct 22 19:00:56.205: INFO: successfully validated that service endpoint-test2 in namespace services-2287 exposes endpoints map[pod2:[80]] (1.082956614s elapsed)
STEP: Deleting pod pod2 in namespace services-2287
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2287 to expose endpoints map[]
Oct 22 19:00:57.257: INFO: successfully validated that service endpoint-test2 in namespace services-2287 exposes endpoints map[] (1.047590362s elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:00:57.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2287" for this suite.
Oct 22 19:01:03.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:01:03.740: INFO: namespace services-2287 deletion completed in 6.14729891s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:16.704 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:01:03.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Oct 22 19:01:03.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9708'
Oct 22 19:01:04.085: INFO: stderr: ""
Oct 22 19:01:04.085: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Oct 22 19:01:05.090: INFO: Selector matched 1 pods for map[app:redis]
Oct 22 19:01:05.090: INFO: Found 0 / 1
Oct 22 19:01:06.089: INFO: Selector matched 1 pods for map[app:redis]
Oct 22 19:01:06.089: INFO: Found 0 / 1
Oct 22 19:01:07.090: INFO: Selector matched 1 pods for map[app:redis]
Oct 22 19:01:07.090: INFO: Found 0 / 1
Oct 22 19:01:08.090: INFO: Selector matched 1 pods for map[app:redis]
Oct 22 19:01:08.090: INFO: Found 1 / 1
Oct 22 19:01:08.090: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Oct 22 19:01:08.093: INFO: Selector matched 1 pods for map[app:redis]
Oct 22 19:01:08.093: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings Oct 22 19:01:08.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dwt9p redis-master --namespace=kubectl-9708' Oct 22 19:01:08.224: INFO: stderr: "" Oct 22 19:01:08.224: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 Oct 19:01:06.594 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Oct 19:01:06.595 # Server started, Redis version 3.2.12\n1:M 22 Oct 19:01:06.595 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Oct 19:01:06.595 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Oct 22 19:01:08.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dwt9p redis-master --namespace=kubectl-9708 --tail=1' Oct 22 19:01:08.344: INFO: stderr: "" Oct 22 19:01:08.344: INFO: stdout: "1:M 22 Oct 19:01:06.595 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Oct 22 19:01:08.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dwt9p redis-master --namespace=kubectl-9708 --limit-bytes=1' Oct 22 19:01:08.457: INFO: stderr: "" Oct 22 19:01:08.457: INFO: stdout: " " STEP: exposing timestamps Oct 22 19:01:08.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dwt9p redis-master --namespace=kubectl-9708 --tail=1 --timestamps' Oct 22 19:01:08.572: INFO: stderr: "" Oct 22 19:01:08.572: INFO: stdout: "2020-10-22T19:01:06.595353598Z 1:M 22 Oct 19:01:06.595 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Oct 22 19:01:11.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dwt9p redis-master --namespace=kubectl-9708 --since=1s' Oct 22 19:01:11.184: INFO: stderr: "" Oct 22 19:01:11.184: INFO: stdout: "" Oct 22 19:01:11.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-dwt9p redis-master --namespace=kubectl-9708 --since=24h' Oct 22 19:01:11.294: INFO: stderr: "" Oct 22 19:01:11.294: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 Oct 19:01:06.594 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Oct 19:01:06.595 # Server started, Redis version 3.2.12\n1:M 22 Oct 19:01:06.595 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Oct 19:01:06.595 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Oct 22 19:01:11.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9708' Oct 22 19:01:11.403: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 22 19:01:11.403: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Oct 22 19:01:11.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-9708' Oct 22 19:01:11.507: INFO: stderr: "No resources found.\n" Oct 22 19:01:11.507: INFO: stdout: "" Oct 22 19:01:11.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-9708 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 22 19:01:11.611: INFO: stderr: "" Oct 22 19:01:11.611: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Oct 22 19:01:11.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9708" for this suite. 
Oct 22 19:01:17.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:01:17.801: INFO: namespace kubectl-9708 deletion completed in 6.186468692s

• [SLOW TEST:14.061 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:01:17.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-76237fe8-72bc-4fc4-a49b-0fd72ff79770
STEP: Creating a pod to test consume secrets
Oct 22 19:01:17.900: INFO: Waiting up to 5m0s for pod "pod-secrets-6e33df24-45dd-49c5-872c-71ed9d59368f" in namespace "secrets-3615" to be "success or failure"
Oct 22 19:01:17.917: INFO: Pod "pod-secrets-6e33df24-45dd-49c5-872c-71ed9d59368f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.109816ms
Oct 22 19:01:19.921: INFO: Pod "pod-secrets-6e33df24-45dd-49c5-872c-71ed9d59368f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020779413s
Oct 22 19:01:21.925: INFO: Pod "pod-secrets-6e33df24-45dd-49c5-872c-71ed9d59368f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024916781s
STEP: Saw pod success
Oct 22 19:01:21.925: INFO: Pod "pod-secrets-6e33df24-45dd-49c5-872c-71ed9d59368f" satisfied condition "success or failure"
Oct 22 19:01:21.928: INFO: Trying to get logs from node iruya-worker pod pod-secrets-6e33df24-45dd-49c5-872c-71ed9d59368f container secret-volume-test: 
STEP: delete the pod
Oct 22 19:01:21.962: INFO: Waiting for pod pod-secrets-6e33df24-45dd-49c5-872c-71ed9d59368f to disappear
Oct 22 19:01:21.975: INFO: Pod pod-secrets-6e33df24-45dd-49c5-872c-71ed9d59368f no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:01:21.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3615" for this suite.
Oct 22 19:01:27.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:01:28.071: INFO: namespace secrets-3615 deletion completed in 6.091812039s

• [SLOW TEST:10.270 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:01:28.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-f2853898-4f81-4568-886a-653c838cc8f3
STEP: Creating configMap with name cm-test-opt-upd-9525a18b-e78f-469f-b981-0317cdc356ac
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-f2853898-4f81-4568-886a-653c838cc8f3
STEP: Updating configmap cm-test-opt-upd-9525a18b-e78f-469f-b981-0317cdc356ac
STEP: Creating configMap with name cm-test-opt-create-ddcaccaf-b89c-413f-a72f-8a8cba73187c
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:01:38.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5195" for this suite.
Oct 22 19:02:02.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:02:02.572: INFO: namespace projected-5195 deletion completed in 24.082182655s

• [SLOW TEST:34.499 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:02:02.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Oct 22 19:02:02.637: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
alternatives.log
containers/

------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Oct 22 19:02:08.887: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c106041c-7e2f-4e36-a23d-7837cdcce91b" in namespace "projected-3001" to be "success or failure"
Oct 22 19:02:08.892: INFO: Pod "downwardapi-volume-c106041c-7e2f-4e36-a23d-7837cdcce91b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.965472ms
Oct 22 19:02:10.897: INFO: Pod "downwardapi-volume-c106041c-7e2f-4e36-a23d-7837cdcce91b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009615121s
Oct 22 19:02:12.900: INFO: Pod "downwardapi-volume-c106041c-7e2f-4e36-a23d-7837cdcce91b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013003651s
STEP: Saw pod success
Oct 22 19:02:12.900: INFO: Pod "downwardapi-volume-c106041c-7e2f-4e36-a23d-7837cdcce91b" satisfied condition "success or failure"
Oct 22 19:02:12.902: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-c106041c-7e2f-4e36-a23d-7837cdcce91b container client-container: 
STEP: delete the pod
Oct 22 19:02:12.972: INFO: Waiting for pod downwardapi-volume-c106041c-7e2f-4e36-a23d-7837cdcce91b to disappear
Oct 22 19:02:13.046: INFO: Pod downwardapi-volume-c106041c-7e2f-4e36-a23d-7837cdcce91b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:02:13.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3001" for this suite.
Oct 22 19:02:19.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:02:19.217: INFO: namespace projected-3001 deletion completed in 6.166638265s

• [SLOW TEST:10.401 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
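The spec above relies on the downward API's fallback rule: when a container declares no CPU limit, a resourceFieldRef for limits.cpu resolves to the node's allocatable CPU instead of failing. A minimal sketch of an equivalent pod, not the framework's generated manifest; the name cpu-limit-demo, the busybox image and the mount path are assumptions, and a plain downwardAPI volume is used where the e2e test uses the projected form:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cpu-limit-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                # deliberately has no resources.limits.cpu
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: main
          resource: limits.cpu    # with no limit set, reports node allocatable CPU
EOF
kubectl logs cpu-limit-demo       # prints the node's allocatable CPU rather than an error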
------------------------------
SSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:02:19.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-454fc8f7-bea9-43a5-b568-6f117f3fdcaf in namespace container-probe-1376
Oct 22 19:02:23.310: INFO: Started pod test-webserver-454fc8f7-bea9-43a5-b568-6f117f3fdcaf in namespace container-probe-1376
STEP: checking the pod's current state and verifying that restartCount is present
Oct 22 19:02:23.313: INFO: Initial restart count of pod test-webserver-454fc8f7-bea9-43a5-b568-6f117f3fdcaf is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:06:24.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1376" for this suite.
Oct 22 19:06:30.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:06:30.265: INFO: namespace container-probe-1376 deletion completed in 6.098107954s

• [SLOW TEST:251.048 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
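For contrast with the failing-probe case later in the run, this is the shape of a pod whose HTTP liveness probe keeps succeeding, so restartCount stays at 0 for the whole observation window. A hedged sketch, not the framework's test-webserver pod; the name, image and probe settings are assumptions:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-ok               # hypothetical name
spec:
  containers:
  - name: web
    image: nginx                  # assumed image that serves / on port 80
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 5
      failureThreshold: 3
EOF
# restartCount should remain 0 while the probe keeps passing
kubectl get pod liveness-ok -o jsonpath='{.status.containerStatuses[0].restartCount}'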
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:06:30.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Oct 22 19:06:34.382: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:06:34.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2872" for this suite.
Oct 22 19:06:40.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:06:40.514: INFO: namespace container-runtime-2872 deletion completed in 6.085383966s

• [SLOW TEST:10.249 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
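The policy being exercised is FallbackToLogsOnError: logs are only used as the termination message when the message file is empty and the container failed, so a succeeding container that writes the file still reports the file's contents. A rough equivalent, with assumed names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: term-msg-demo             # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "printf OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# once the pod has succeeded, the terminated state carries the message "OK"
kubectl get pod term-msg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'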
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:06:40.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Oct 22 19:06:40.600: INFO: Waiting up to 5m0s for pod "pod-91168f03-6f66-4aad-ba3c-bbaa900a196f" in namespace "emptydir-1301" to be "success or failure"
Oct 22 19:06:40.604: INFO: Pod "pod-91168f03-6f66-4aad-ba3c-bbaa900a196f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.670833ms
Oct 22 19:06:42.677: INFO: Pod "pod-91168f03-6f66-4aad-ba3c-bbaa900a196f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076734732s
Oct 22 19:06:44.681: INFO: Pod "pod-91168f03-6f66-4aad-ba3c-bbaa900a196f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080949704s
STEP: Saw pod success
Oct 22 19:06:44.681: INFO: Pod "pod-91168f03-6f66-4aad-ba3c-bbaa900a196f" satisfied condition "success or failure"
Oct 22 19:06:44.684: INFO: Trying to get logs from node iruya-worker pod pod-91168f03-6f66-4aad-ba3c-bbaa900a196f container test-container: 
STEP: delete the pod
Oct 22 19:06:44.744: INFO: Waiting for pod pod-91168f03-6f66-4aad-ba3c-bbaa900a196f to disappear
Oct 22 19:06:44.748: INFO: Pod pod-91168f03-6f66-4aad-ba3c-bbaa900a196f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:06:44.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1301" for this suite.
Oct 22 19:06:50.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:06:50.851: INFO: namespace emptydir-1301 deletion completed in 6.100216246s

• [SLOW TEST:10.336 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
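The "correct mode" asserted here is the default permission bits on an emptyDir mount backed by node storage (no medium specified). A quick way to reproduce the check by hand, with assumed names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                  # default medium: node disk, not memory
EOF
kubectl logs emptydir-mode-demo   # shows the directory mode the test checks (world-writable by default)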
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:06:50.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Oct 22 19:06:50.959: INFO: Waiting up to 5m0s for pod "downward-api-b223f901-1a11-4e98-ab62-7ad3f3721aab" in namespace "downward-api-4773" to be "success or failure"
Oct 22 19:06:50.998: INFO: Pod "downward-api-b223f901-1a11-4e98-ab62-7ad3f3721aab": Phase="Pending", Reason="", readiness=false. Elapsed: 38.547612ms
Oct 22 19:06:53.002: INFO: Pod "downward-api-b223f901-1a11-4e98-ab62-7ad3f3721aab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043251217s
Oct 22 19:06:55.006: INFO: Pod "downward-api-b223f901-1a11-4e98-ab62-7ad3f3721aab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046936221s
STEP: Saw pod success
Oct 22 19:06:55.006: INFO: Pod "downward-api-b223f901-1a11-4e98-ab62-7ad3f3721aab" satisfied condition "success or failure"
Oct 22 19:06:55.008: INFO: Trying to get logs from node iruya-worker pod downward-api-b223f901-1a11-4e98-ab62-7ad3f3721aab container dapi-container: 
STEP: delete the pod
Oct 22 19:06:55.071: INFO: Waiting for pod downward-api-b223f901-1a11-4e98-ab62-7ad3f3721aab to disappear
Oct 22 19:06:55.234: INFO: Pod downward-api-b223f901-1a11-4e98-ab62-7ad3f3721aab no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:06:55.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4773" for this suite.
Oct 22 19:07:01.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:07:01.340: INFO: namespace downward-api-4773 deletion completed in 6.101919615s

• [SLOW TEST:10.488 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
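The same fallback applies to environment variables as to downward API volumes: with no limits declared on the container, resourceFieldRef env vars resolve to the node's allocatable CPU and memory. A hedged sketch with assumed names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                # no resources.limits set, so node allocatable is reported
    command: ["sh", "-c", "env | grep LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: main
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: main
          resource: limits.memory
EOF
kubectl logs downward-env-demo    # prints the two resolved values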
------------------------------
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:07:01.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Oct 22 19:07:01.474: INFO: Waiting up to 5m0s for pod "var-expansion-f9dc4d74-ef8f-4249-8206-d40af469ae35" in namespace "var-expansion-269" to be "success or failure"
Oct 22 19:07:01.507: INFO: Pod "var-expansion-f9dc4d74-ef8f-4249-8206-d40af469ae35": Phase="Pending", Reason="", readiness=false. Elapsed: 32.230954ms
Oct 22 19:07:05.132: INFO: Pod "var-expansion-f9dc4d74-ef8f-4249-8206-d40af469ae35": Phase="Pending", Reason="", readiness=false. Elapsed: 3.657555678s
Oct 22 19:07:07.136: INFO: Pod "var-expansion-f9dc4d74-ef8f-4249-8206-d40af469ae35": Phase="Pending", Reason="", readiness=false. Elapsed: 5.66164681s
Oct 22 19:07:09.140: INFO: Pod "var-expansion-f9dc4d74-ef8f-4249-8206-d40af469ae35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.665846718s
STEP: Saw pod success
Oct 22 19:07:09.140: INFO: Pod "var-expansion-f9dc4d74-ef8f-4249-8206-d40af469ae35" satisfied condition "success or failure"
Oct 22 19:07:09.143: INFO: Trying to get logs from node iruya-worker pod var-expansion-f9dc4d74-ef8f-4249-8206-d40af469ae35 container dapi-container: 
STEP: delete the pod
Oct 22 19:07:09.168: INFO: Waiting for pod var-expansion-f9dc4d74-ef8f-4249-8206-d40af469ae35 to disappear
Oct 22 19:07:09.179: INFO: Pod var-expansion-f9dc4d74-ef8f-4249-8206-d40af469ae35 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:07:09.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-269" for this suite.
Oct 22 19:07:15.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:07:15.282: INFO: namespace var-expansion-269 deletion completed in 6.099556283s

• [SLOW TEST:13.941 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
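The substitution tested here is done by the kubelet, not by a shell: $(VAR) references in command and args are expanded from the container's declared env before the process starts. A minimal illustration, names assumed:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    env:
    - name: MESSAGE
      value: "test-value"
    # $(MESSAGE) is expanded by Kubernetes before the command runs
    command: ["sh", "-c", "echo substituted: $(MESSAGE)"]
EOF
kubectl logs var-expansion-demo   # prints "substituted: test-value"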
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:07:15.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Oct 22 19:07:15.361: INFO: Waiting up to 5m0s for pod "downwardapi-volume-745300e2-a874-4e25-80ac-3dab9404fcd0" in namespace "downward-api-426" to be "success or failure"
Oct 22 19:07:15.365: INFO: Pod "downwardapi-volume-745300e2-a874-4e25-80ac-3dab9404fcd0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.961013ms
Oct 22 19:07:17.369: INFO: Pod "downwardapi-volume-745300e2-a874-4e25-80ac-3dab9404fcd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007327561s
Oct 22 19:07:19.373: INFO: Pod "downwardapi-volume-745300e2-a874-4e25-80ac-3dab9404fcd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011306027s
STEP: Saw pod success
Oct 22 19:07:19.373: INFO: Pod "downwardapi-volume-745300e2-a874-4e25-80ac-3dab9404fcd0" satisfied condition "success or failure"
Oct 22 19:07:19.375: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-745300e2-a874-4e25-80ac-3dab9404fcd0 container client-container: 
STEP: delete the pod
Oct 22 19:07:19.397: INFO: Waiting for pod downwardapi-volume-745300e2-a874-4e25-80ac-3dab9404fcd0 to disappear
Oct 22 19:07:19.401: INFO: Pod downwardapi-volume-745300e2-a874-4e25-80ac-3dab9404fcd0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:07:19.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-426" for this suite.
Oct 22 19:07:25.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:07:25.494: INFO: namespace downward-api-426 deletion completed in 6.089961122s

• [SLOW TEST:10.212 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
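Per-item modes on a downward API volume are set with items[].mode, and the test asserts the projected file ends up with exactly that mode. A sketch under assumed names; the 0400 value and the stat-based check are illustrative, not the framework's mounttest invocation:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo        # hypothetical name
  labels:
    app: demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # -L follows the symlink the kubelet creates for projected files
    command: ["sh", "-c", "stat -L -c %a /etc/podinfo/labels"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
        mode: 0400                # the per-item mode under test
EOF
kubectl logs downward-mode-demo   # expected to print 400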
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:07:25.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Oct 22 19:07:25.619: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"40849a65-55a8-4661-aa30-4b53d16196d2", Controller:(*bool)(0xc0018d71ba), BlockOwnerDeletion:(*bool)(0xc0018d71bb)}}
Oct 22 19:07:25.655: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"315df720-eb08-4ea4-b8d2-4c1ceb64eaef", Controller:(*bool)(0xc002ea060a), BlockOwnerDeletion:(*bool)(0xc002ea060b)}}
Oct 22 19:07:25.664: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"662ba7d4-99e2-465f-81c5-ebe13377202d", Controller:(*bool)(0xc002ea07ba), BlockOwnerDeletion:(*bool)(0xc002ea07bb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:07:30.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7565" for this suite.
Oct 22 19:07:38.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:07:38.851: INFO: namespace gc-7565 deletion completed in 8.136153384s

• [SLOW TEST:13.357 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
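The circle above is built purely from metadata.ownerReferences (pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2, each with blockOwnerDeletion), and the garbage collector is expected to tear all three down rather than deadlock. A hedged sketch of wiring one such reference by hand, assuming pods named pod1 and pod2 already exist:

# look up the owner's UID, then point pod2's ownerReferences at pod1
POD1_UID=$(kubectl get pod pod1 -o jsonpath='{.metadata.uid}')
kubectl patch pod pod2 --type=merge -p "{
  \"metadata\": {
    \"ownerReferences\": [{
      \"apiVersion\": \"v1\",
      \"kind\": \"Pod\",
      \"name\": \"pod1\",
      \"uid\": \"${POD1_UID}\",
      \"controller\": true,
      \"blockOwnerDeletion\": true
    }]
  }
}"
# once the references form a cycle, deleting any member should eventually remove all of them
kubectl delete pod pod1 --wait=false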
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:07:38.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct 22 19:07:38.967: INFO: Waiting up to 5m0s for pod "pod-03acb9cb-00ff-468d-8151-2a0948d7cfe4" in namespace "emptydir-6307" to be "success or failure"
Oct 22 19:07:38.975: INFO: Pod "pod-03acb9cb-00ff-468d-8151-2a0948d7cfe4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.253731ms
Oct 22 19:07:40.979: INFO: Pod "pod-03acb9cb-00ff-468d-8151-2a0948d7cfe4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012185964s
Oct 22 19:07:42.983: INFO: Pod "pod-03acb9cb-00ff-468d-8151-2a0948d7cfe4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016050515s
STEP: Saw pod success
Oct 22 19:07:42.983: INFO: Pod "pod-03acb9cb-00ff-468d-8151-2a0948d7cfe4" satisfied condition "success or failure"
Oct 22 19:07:42.986: INFO: Trying to get logs from node iruya-worker2 pod pod-03acb9cb-00ff-468d-8151-2a0948d7cfe4 container test-container: 
STEP: delete the pod
Oct 22 19:07:43.134: INFO: Waiting for pod pod-03acb9cb-00ff-468d-8151-2a0948d7cfe4 to disappear
Oct 22 19:07:43.143: INFO: Pod pod-03acb9cb-00ff-468d-8151-2a0948d7cfe4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:07:43.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6307" for this suite.
Oct 22 19:07:49.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:07:49.266: INFO: namespace emptydir-6307 deletion completed in 6.118958692s

• [SLOW TEST:10.414 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
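The (non-root,0644,default) variant means: run as a non-root UID, create a file with mode 0644 in the volume, on the default disk-backed medium, and verify the result. Roughly, with an assumed name and UID:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-demo     # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001               # assumed non-root UID
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
EOF
kubectl logs emptydir-nonroot-demo   # shows the file created with 0644 by the non-root user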
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:07:49.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:07:53.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2760" for this suite.
Oct 22 19:08:33.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:08:33.473: INFO: namespace kubelet-test-2760 deletion completed in 40.101018102s

• [SLOW TEST:44.207 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
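The read-only behaviour comes from securityContext.readOnlyRootFilesystem: with it set, writes to the container's root filesystem fail while mounted volumes remain writable. Sketch, names assumed:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-rootfs-demo      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    securityContext:
      readOnlyRootFilesystem: true
    # the write is expected to fail; the fallback message lands in the container log
    command: ["sh", "-c", "echo hello > /file || echo 'write refused: read-only root filesystem'"]
EOF
kubectl logs readonly-rootfs-demo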
------------------------------
SSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:08:33.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-fa5a2835-b626-4fe1-aad7-6f16d1344428 in namespace container-probe-310
Oct 22 19:08:39.556: INFO: Started pod liveness-fa5a2835-b626-4fe1-aad7-6f16d1344428 in namespace container-probe-310
STEP: checking the pod's current state and verifying that restartCount is present
Oct 22 19:08:39.558: INFO: Initial restart count of pod liveness-fa5a2835-b626-4fe1-aad7-6f16d1344428 is 0
Oct 22 19:08:59.673: INFO: Restart count of pod container-probe-310/liveness-fa5a2835-b626-4fe1-aad7-6f16d1344428 is now 1 (20.115210094s elapsed)
Oct 22 19:09:17.713: INFO: Restart count of pod container-probe-310/liveness-fa5a2835-b626-4fe1-aad7-6f16d1344428 is now 2 (38.155417834s elapsed)
Oct 22 19:09:40.600: INFO: Restart count of pod container-probe-310/liveness-fa5a2835-b626-4fe1-aad7-6f16d1344428 is now 3 (1m1.041943611s elapsed)
Oct 22 19:09:58.639: INFO: Restart count of pod container-probe-310/liveness-fa5a2835-b626-4fe1-aad7-6f16d1344428 is now 4 (1m19.080975101s elapsed)
Oct 22 19:11:06.774: INFO: Restart count of pod container-probe-310/liveness-fa5a2835-b626-4fe1-aad7-6f16d1344428 is now 5 (2m27.216123594s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:11:06.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-310" for this suite.
Oct 22 19:11:12.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:11:12.905: INFO: namespace container-probe-310 deletion completed in 6.090677445s

• [SLOW TEST:159.432 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
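A liveness probe that starts failing after a while produces exactly this pattern: each time the probe exceeds failureThreshold the kubelet kills and restarts the container, and restartCount only ever increases. A compact reproduction with an exec probe (names assumed):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo        # hypothetical name
spec:
  containers:
  - name: main
    image: busybox
    # healthy for 30s, then the probe file disappears and the probe starts failing
    command: ["sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# the restart count climbs 1, 2, 3, ... and never decreases
kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'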
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:11:12.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Oct 22 19:11:18.064: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:11:19.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5656" for this suite.
Oct 22 19:11:41.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:11:41.276: INFO: namespace replicaset-5656 deletion completed in 22.164888365s

• [SLOW TEST:28.371 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
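Adoption and release are driven entirely by label selection: a bare pod whose labels match a ReplicaSet's selector gains an ownerReference to it, and relabelling the pod out of the selector removes that ownerReference again. A sketch with assumed names and images, not the framework's pod-adoption-release objects:

# a bare pod carrying the label the ReplicaSet will select on
kubectl run pod-adoption-demo --image=nginx --labels=name=adoption-demo --restart=Never
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: adoption-demo-rs          # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      name: adoption-demo
  template:
    metadata:
      labels:
        name: adoption-demo
    spec:
      containers:
      - name: main
        image: nginx
EOF
# the bare pod is adopted: an ownerReference to the ReplicaSet appears
kubectl get pod pod-adoption-demo -o jsonpath='{.metadata.ownerReferences[*].name}'
# relabel the pod so it no longer matches; the ownerReference is dropped and the pod is released
kubectl label pod pod-adoption-demo name=released --overwrite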
------------------------------
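The adopt/release behaviour exercised above comes down to label selectors and ownerReferences: a bare pod whose labels match a ReplicaSet's selector gets adopted (an ownerReference pointing at the ReplicaSet is added), and relabelling it so the selector no longer matches gets it released again. A sketch of the two objects involved, assuming illustrative names and the nginx image that appears elsewhere in this run:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"name": "pod-adoption-release"}
	replicas := int32(1)

	// A bare pod carrying the label the ReplicaSet selects on; once the
	// ReplicaSet exists it adopts this orphan instead of creating a new pod.
	orphan := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release", Labels: labels},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "nginx", Image: "docker.io/library/nginx:1.14-alpine"}},
		},
	}

	// ReplicaSet whose selector matches the orphan's label. Changing the pod's
	// label afterwards makes the selector stop matching, so the controller
	// releases the pod (its ownerReference is removed) again.
	rs := appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       orphan.Spec,
			},
		},
	}

	for _, obj := range []interface{}{orphan, rs} {
		out, err := json.MarshalIndent(obj, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}
}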
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:11:41.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Oct 22 19:11:41.353: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8d85a7a7-5c8e-44ef-8363-439b0a976cb8" in namespace "downward-api-639" to be "success or failure"
Oct 22 19:11:41.416: INFO: Pod "downwardapi-volume-8d85a7a7-5c8e-44ef-8363-439b0a976cb8": Phase="Pending", Reason="", readiness=false. Elapsed: 63.053266ms
Oct 22 19:11:43.420: INFO: Pod "downwardapi-volume-8d85a7a7-5c8e-44ef-8363-439b0a976cb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067310676s
Oct 22 19:11:45.423: INFO: Pod "downwardapi-volume-8d85a7a7-5c8e-44ef-8363-439b0a976cb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070517424s
STEP: Saw pod success
Oct 22 19:11:45.423: INFO: Pod "downwardapi-volume-8d85a7a7-5c8e-44ef-8363-439b0a976cb8" satisfied condition "success or failure"
Oct 22 19:11:45.425: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-8d85a7a7-5c8e-44ef-8363-439b0a976cb8 container client-container: 
STEP: delete the pod
Oct 22 19:11:45.549: INFO: Waiting for pod downwardapi-volume-8d85a7a7-5c8e-44ef-8363-439b0a976cb8 to disappear
Oct 22 19:11:45.583: INFO: Pod downwardapi-volume-8d85a7a7-5c8e-44ef-8363-439b0a976cb8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:11:45.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-639" for this suite.
Oct 22 19:11:51.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:11:51.762: INFO: namespace downward-api-639 deletion completed in 6.175170694s

• [SLOW TEST:10.485 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
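The downward-API volume test above mounts a file whose contents are filled in from the container's own resource limits. A sketch of such a pod, assuming an illustrative 250m CPU limit, mount path, and file name; the key piece is the resourceFieldRef pointing at limits.cpu with a 1m divisor, so the file holds the limit in millicores:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The downwardAPI volume projects the container's own limits.cpu into
	// /etc/podinfo/cpu_limit, which the container then reads back.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("250m"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
								Divisor:       resource.MustParse("1m"),
							},
						}},
					},
				},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}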
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:11:51.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Oct 22 19:11:51.885: INFO: Waiting up to 5m0s for pod "client-containers-3d493b50-0ec0-45f6-bcb5-cc02657b940f" in namespace "containers-393" to be "success or failure"
Oct 22 19:11:51.895: INFO: Pod "client-containers-3d493b50-0ec0-45f6-bcb5-cc02657b940f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.520693ms
Oct 22 19:11:53.900: INFO: Pod "client-containers-3d493b50-0ec0-45f6-bcb5-cc02657b940f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014247161s
Oct 22 19:11:55.948: INFO: Pod "client-containers-3d493b50-0ec0-45f6-bcb5-cc02657b940f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062518541s
STEP: Saw pod success
Oct 22 19:11:55.948: INFO: Pod "client-containers-3d493b50-0ec0-45f6-bcb5-cc02657b940f" satisfied condition "success or failure"
Oct 22 19:11:55.951: INFO: Trying to get logs from node iruya-worker pod client-containers-3d493b50-0ec0-45f6-bcb5-cc02657b940f container test-container: 
STEP: delete the pod
Oct 22 19:11:55.971: INFO: Waiting for pod client-containers-3d493b50-0ec0-45f6-bcb5-cc02657b940f to disappear
Oct 22 19:11:55.975: INFO: Pod client-containers-3d493b50-0ec0-45f6-bcb5-cc02657b940f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:11:55.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-393" for this suite.
Oct 22 19:12:02.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:12:02.089: INFO: namespace containers-393 deletion completed in 6.108230022s

• [SLOW TEST:10.327 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
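The point of the Docker Containers test above is the split between a container spec's command and args: setting args alone replaces the image's default CMD while leaving its ENTRYPOINT intact, which is what "override the image's default arguments (docker cmd)" means; setting command as well would override the ENTRYPOINT too. A sketch with an illustrative image and argument list:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Only Args is set, so the image's CMD is overridden with these arguments
	// while any ENTRYPOINT baked into the image is kept as-is.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29",
				Args:  []string{"echo", "overridden", "arguments"},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}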
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:12:02.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Oct 22 19:12:02.184: INFO: Waiting up to 5m0s for pod "var-expansion-df90b1bd-4c5c-4a9f-a493-e9f70aa30013" in namespace "var-expansion-5574" to be "success or failure"
Oct 22 19:12:02.197: INFO: Pod "var-expansion-df90b1bd-4c5c-4a9f-a493-e9f70aa30013": Phase="Pending", Reason="", readiness=false. Elapsed: 13.435912ms
Oct 22 19:12:04.201: INFO: Pod "var-expansion-df90b1bd-4c5c-4a9f-a493-e9f70aa30013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017224199s
Oct 22 19:12:06.206: INFO: Pod "var-expansion-df90b1bd-4c5c-4a9f-a493-e9f70aa30013": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021904056s
STEP: Saw pod success
Oct 22 19:12:06.206: INFO: Pod "var-expansion-df90b1bd-4c5c-4a9f-a493-e9f70aa30013" satisfied condition "success or failure"
Oct 22 19:12:06.209: INFO: Trying to get logs from node iruya-worker pod var-expansion-df90b1bd-4c5c-4a9f-a493-e9f70aa30013 container dapi-container: 
STEP: delete the pod
Oct 22 19:12:06.260: INFO: Waiting for pod var-expansion-df90b1bd-4c5c-4a9f-a493-e9f70aa30013 to disappear
Oct 22 19:12:06.275: INFO: Pod var-expansion-df90b1bd-4c5c-4a9f-a493-e9f70aa30013 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:12:06.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5574" for this suite.
Oct 22 19:12:12.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:12:12.363: INFO: namespace var-expansion-5574 deletion completed in 6.084240304s

• [SLOW TEST:10.274 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
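Variable expansion in args relies on the $(VAR) syntax: references to environment variables declared on the same container are substituted by the kubelet before the command runs (unlike shell $VAR expansion, which happens inside the container). A sketch with illustrative variable names and values:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// $(GREETING) and $(POD_LOCATION) in Args are expanded from the container's
	// Env entries before the process starts.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c"},
				Args:    []string{"echo $(GREETING) from $(POD_LOCATION)"},
				Env: []corev1.EnvVar{
					{Name: "GREETING", Value: "hello"},
					{Name: "POD_LOCATION", Value: "the args test"},
				},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}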
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:12:12.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1346
[It] Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1346
STEP: Creating statefulset with conflicting port in namespace statefulset-1346
STEP: Waiting until pod test-pod starts running in namespace statefulset-1346
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-1346
Oct 22 19:12:16.579: INFO: Observed stateful pod in namespace: statefulset-1346, name: ss-0, uid: 89581964-7445-4730-98b8-dffe1823a2a2, status phase: Failed. Waiting for statefulset controller to delete.
Oct 22 19:12:16.623: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1346
STEP: Removing pod with conflicting port in namespace statefulset-1346
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-1346 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Oct 22 19:12:20.720: INFO: Deleting all statefulset in ns statefulset-1346
Oct 22 19:12:20.723: INFO: Scaling statefulset ss to 0
Oct 22 19:12:30.741: INFO: Waiting for statefulset status.replicas updated to 0
Oct 22 19:12:30.745: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:12:30.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1346" for this suite.
Oct 22 19:12:36.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:12:36.897: INFO: namespace statefulset-1346 deletion completed in 6.099013879s

• [SLOW TEST:24.533 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
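The StatefulSet test above forces ss-0 into a Failed state by having another pod hold the same hostPort on the chosen node, then checks that the controller deletes and recreates ss-0 once the conflicting pod is removed. A sketch of a StatefulSet whose pod binds a hostPort, with illustrative names, port number, and the nginx image seen elsewhere in this run:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"app": "ss-demo"}
	replicas := int32(1)

	// If another pod already holds hostPort 21017 on the target node, ss-0
	// fails to start; after that pod is removed, the StatefulSet controller
	// deletes the Failed ss-0 and recreates it, which is what the test waits for.
	ss := appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			ServiceName: "test",
			Replicas:    &replicas,
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
						Ports: []corev1.ContainerPort{{ContainerPort: 21017, HostPort: 21017}},
					}},
				},
			},
		},
	}
	out, err := json.MarshalIndent(ss, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}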
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:12:36.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Oct 22 19:12:36.939: INFO: PodSpec: initContainers in spec.initContainers
Oct 22 19:13:23.386: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-bd287927-997c-4030-9d0e-c04623a4e77c", GenerateName:"", Namespace:"init-container-9618", SelfLink:"/api/v1/namespaces/init-container-9618/pods/pod-init-bd287927-997c-4030-9d0e-c04623a4e77c", UID:"0d871661-5bf8-4277-a940-b62e6fcd97d6", ResourceVersion:"5310310", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63738990756, loc:(*time.Location)(0x7edea20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"939421041"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-78j5x", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002477d40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-78j5x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-78j5x", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-78j5x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002ea0148), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0022cf260), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002ea01d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002ea01f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002ea01f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002ea01fc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738990757, loc:(*time.Location)(0x7edea20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, 
v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738990757, loc:(*time.Location)(0x7edea20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738990757, loc:(*time.Location)(0x7edea20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738990756, loc:(*time.Location)(0x7edea20)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.5", PodIP:"10.244.2.206", StartTime:(*v1.Time)(0xc00241dba0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00241dbe0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002a76930)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://dc4a59f0061ed3f3c4a0a9e0547286f8b81cd8d27ee175346536a152d768a00e"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00241dc00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00241dbc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:13:23.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9618" for this suite.
Oct 22 19:13:45.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:13:45.606: INFO: namespace init-container-9618 deletion completed in 22.208367456s

• [SLOW TEST:68.709 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
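The pod dumped above (init1 running /bin/false, init2 running /bin/true, app container run1 on pause:3.1, restartPolicy Always) is the whole story: with RestartPolicy Always the kubelet keeps restarting the failing init container (RestartCount:3 in the dump), init2 never starts, and the app container stays in Waiting. A minimal sketch of that pod spec using the same images; the resource limits and service-account volume from the dump are omitted for brevity:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Init containers run strictly in order; because init1 always exits
	// non-zero and the restart policy is Always, it is retried forever and
	// the app container never starts.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}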
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:13:45.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:13:53.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9561" for this suite.
Oct 22 19:14:39.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:14:39.484: INFO: namespace kubelet-test-9561 deletion completed in 46.209368371s

• [SLOW TEST:53.878 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
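The Kubelet test above just runs a busybox command in a pod and checks that whatever it writes to stdout shows up in the container logs (kubectl logs / the pod log endpoint). A sketch of such a pod, with an illustrative name and echo text:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One-shot busybox pod; whatever the command writes to stdout/stderr is
	// captured by the container runtime and served back as the container log.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-scheduling-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "echo 'Hello from the busybox pod'"},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}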
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:14:39.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-9eddcace-ed2c-4f40-83fd-29bf5b51ac66
STEP: Creating secret with name secret-projected-all-test-volume-59a7aef1-b329-4179-9b00-24837bb0a6c5
STEP: Creating a pod to test Check all projections for projected volume plugin
Oct 22 19:14:39.618: INFO: Waiting up to 5m0s for pod "projected-volume-ba4d6bfc-f46d-4fe5-acc8-fcbbab5a5e46" in namespace "projected-1718" to be "success or failure"
Oct 22 19:14:39.674: INFO: Pod "projected-volume-ba4d6bfc-f46d-4fe5-acc8-fcbbab5a5e46": Phase="Pending", Reason="", readiness=false. Elapsed: 56.270492ms
Oct 22 19:14:41.776: INFO: Pod "projected-volume-ba4d6bfc-f46d-4fe5-acc8-fcbbab5a5e46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158120266s
Oct 22 19:14:43.779: INFO: Pod "projected-volume-ba4d6bfc-f46d-4fe5-acc8-fcbbab5a5e46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.161764006s
STEP: Saw pod success
Oct 22 19:14:43.779: INFO: Pod "projected-volume-ba4d6bfc-f46d-4fe5-acc8-fcbbab5a5e46" satisfied condition "success or failure"
Oct 22 19:14:43.781: INFO: Trying to get logs from node iruya-worker pod projected-volume-ba4d6bfc-f46d-4fe5-acc8-fcbbab5a5e46 container projected-all-volume-test: 
STEP: delete the pod
Oct 22 19:14:43.796: INFO: Waiting for pod projected-volume-ba4d6bfc-f46d-4fe5-acc8-fcbbab5a5e46 to disappear
Oct 22 19:14:43.801: INFO: Pod projected-volume-ba4d6bfc-f46d-4fe5-acc8-fcbbab5a5e46 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:14:43.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1718" for this suite.
Oct 22 19:14:49.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:14:49.896: INFO: namespace projected-1718 deletion completed in 6.091095183s

• [SLOW TEST:10.411 seconds]
[sig-storage] Projected combined
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
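A projected volume merges several sources (here a ConfigMap and a Secret, per the names logged above) into one mount, which is the "all components that make up the projection API" the test reads back. A sketch of such a pod; the ConfigMap/Secret names, keys, file paths, and mount point are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Both sources land under one mount: the ConfigMap key at /projected/cm/data
	// and the Secret key at /projected/secret/data.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "projected-all-volume-test",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "cat /projected/cm/data /projected/secret/data"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/projected"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{
							{ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all"},
								Items:                []corev1.KeyToPath{{Key: "configmap-data", Path: "cm/data"}},
							}},
							{Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all"},
								Items:                []corev1.KeyToPath{{Key: "secret-data", Path: "secret/data"}},
							}},
						},
					},
				},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}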
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:14:49.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Oct 22 19:14:49.971: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9818,SelfLink:/api/v1/namespaces/watch-9818/configmaps/e2e-watch-test-configmap-a,UID:08886d9b-8a56-491f-9551-55981fe615d5,ResourceVersion:5310546,Generation:0,CreationTimestamp:2020-10-22 19:14:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Oct 22 19:14:49.972: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9818,SelfLink:/api/v1/namespaces/watch-9818/configmaps/e2e-watch-test-configmap-a,UID:08886d9b-8a56-491f-9551-55981fe615d5,ResourceVersion:5310546,Generation:0,CreationTimestamp:2020-10-22 19:14:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Oct 22 19:14:59.980: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9818,SelfLink:/api/v1/namespaces/watch-9818/configmaps/e2e-watch-test-configmap-a,UID:08886d9b-8a56-491f-9551-55981fe615d5,ResourceVersion:5310567,Generation:0,CreationTimestamp:2020-10-22 19:14:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Oct 22 19:14:59.980: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9818,SelfLink:/api/v1/namespaces/watch-9818/configmaps/e2e-watch-test-configmap-a,UID:08886d9b-8a56-491f-9551-55981fe615d5,ResourceVersion:5310567,Generation:0,CreationTimestamp:2020-10-22 19:14:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Oct 22 19:15:09.988: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9818,SelfLink:/api/v1/namespaces/watch-9818/configmaps/e2e-watch-test-configmap-a,UID:08886d9b-8a56-491f-9551-55981fe615d5,ResourceVersion:5310588,Generation:0,CreationTimestamp:2020-10-22 19:14:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Oct 22 19:15:09.988: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9818,SelfLink:/api/v1/namespaces/watch-9818/configmaps/e2e-watch-test-configmap-a,UID:08886d9b-8a56-491f-9551-55981fe615d5,ResourceVersion:5310588,Generation:0,CreationTimestamp:2020-10-22 19:14:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Oct 22 19:15:19.994: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9818,SelfLink:/api/v1/namespaces/watch-9818/configmaps/e2e-watch-test-configmap-a,UID:08886d9b-8a56-491f-9551-55981fe615d5,ResourceVersion:5310609,Generation:0,CreationTimestamp:2020-10-22 19:14:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Oct 22 19:15:19.994: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9818,SelfLink:/api/v1/namespaces/watch-9818/configmaps/e2e-watch-test-configmap-a,UID:08886d9b-8a56-491f-9551-55981fe615d5,ResourceVersion:5310609,Generation:0,CreationTimestamp:2020-10-22 19:14:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Oct 22 19:15:30.002: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9818,SelfLink:/api/v1/namespaces/watch-9818/configmaps/e2e-watch-test-configmap-b,UID:0b965669-a4d8-4d20-8c38-6d41db28b820,ResourceVersion:5310629,Generation:0,CreationTimestamp:2020-10-22 19:15:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Oct 22 19:15:30.003: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9818,SelfLink:/api/v1/namespaces/watch-9818/configmaps/e2e-watch-test-configmap-b,UID:0b965669-a4d8-4d20-8c38-6d41db28b820,ResourceVersion:5310629,Generation:0,CreationTimestamp:2020-10-22 19:15:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Oct 22 19:15:40.010: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9818,SelfLink:/api/v1/namespaces/watch-9818/configmaps/e2e-watch-test-configmap-b,UID:0b965669-a4d8-4d20-8c38-6d41db28b820,ResourceVersion:5310649,Generation:0,CreationTimestamp:2020-10-22 19:15:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Oct 22 19:15:40.010: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9818,SelfLink:/api/v1/namespaces/watch-9818/configmaps/e2e-watch-test-configmap-b,UID:0b965669-a4d8-4d20-8c38-6d41db28b820,ResourceVersion:5310649,Generation:0,CreationTimestamp:2020-10-22 19:15:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:15:50.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9818" for this suite.
Oct 22 19:15:56.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:15:56.104: INFO: namespace watch-9818 deletion completed in 6.088729114s

• [SLOW TEST:66.208 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
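The Watchers test drives three label-selector watches (label A, label B, A-or-B) and checks that each sees exactly the ADDED/MODIFIED/DELETED events for the configmaps it selects. A sketch of the label-A ConfigMap and the ListOptions a matching watch would be opened with; the names and label values follow the log above, the rest is illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	// ConfigMap carrying label A; a second one labelled multiple-watchers-B is
	// created the same way for the label-B and A-or-B watches.
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "e2e-watch-test-configmap-a",
			Labels: map[string]string{"watch-this-configmap": "multiple-watchers-A"},
		},
		Data: map[string]string{"mutation": "1"},
	}

	// ListOptions restricting a watch to label-A configmaps; a watch opened on
	// configmaps with these options only delivers events for matching objects.
	watchA := metav1.ListOptions{
		LabelSelector: labels.SelectorFromSet(labels.Set{"watch-this-configmap": "multiple-watchers-A"}).String(),
	}

	out, err := json.MarshalIndent(cm, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
	fmt.Println("label selector for watch A:", watchA.LabelSelector)
}

The A-or-B watch uses a set-based selector over the same key (matching either value) rather than a second equality selector.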
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:15:56.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Oct 22 19:16:04.234: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct 22 19:16:04.238: INFO: Pod pod-with-prestop-exec-hook still exists
Oct 22 19:16:06.238: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct 22 19:16:06.242: INFO: Pod pod-with-prestop-exec-hook still exists
Oct 22 19:16:08.238: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct 22 19:16:08.242: INFO: Pod pod-with-prestop-exec-hook still exists
Oct 22 19:16:10.238: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct 22 19:16:10.244: INFO: Pod pod-with-prestop-exec-hook still exists
Oct 22 19:16:12.238: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct 22 19:16:12.243: INFO: Pod pod-with-prestop-exec-hook still exists
Oct 22 19:16:14.238: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct 22 19:16:14.242: INFO: Pod pod-with-prestop-exec-hook still exists
Oct 22 19:16:16.238: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct 22 19:16:16.243: INFO: Pod pod-with-prestop-exec-hook still exists
Oct 22 19:16:18.238: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct 22 19:16:18.243: INFO: Pod pod-with-prestop-exec-hook still exists
Oct 22 19:16:20.238: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct 22 19:16:20.242: INFO: Pod pod-with-prestop-exec-hook still exists
Oct 22 19:16:22.238: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct 22 19:16:22.242: INFO: Pod pod-with-prestop-exec-hook still exists
Oct 22 19:16:24.238: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct 22 19:16:24.243: INFO: Pod pod-with-prestop-exec-hook still exists
Oct 22 19:16:26.238: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct 22 19:16:26.243: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:16:26.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7016" for this suite.
Oct 22 19:16:48.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:16:48.400: INFO: namespace container-lifecycle-hook-7016 deletion completed in 22.146460754s

• [SLOW TEST:52.296 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
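A preStop hook runs inside the container after the pod's deletion is requested and before the container is stopped, which is why the log above keeps polling for pod-with-prestop-exec-hook to disappear while the hook executes. A sketch of a pod with an exec preStop hook, written against the v1.15 core/v1 types (where the hook type is the Handler struct; newer trees rename it LifecycleHandler); the URL the hook pings is hypothetical, standing in for the HTTPGet handler pod the test creates:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// On deletion the kubelet runs the preStop command to completion (or until
	// the grace period expires) before sending the stop signal to the container.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-prestop-exec-hook",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// Hypothetical hook body: notify a handler service before shutdown.
							Command: []string{"sh", "-c", "wget -qO- http://handler.default.svc:8080/prestop || true"},
						},
					},
				},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}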
SSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:16:48.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Oct 22 19:16:48.467: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Oct 22 19:16:53.489: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Oct 22 19:16:53.489: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Oct 22 19:16:53.510: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-5269,SelfLink:/apis/apps/v1/namespaces/deployment-5269/deployments/test-cleanup-deployment,UID:6fc8fb7b-ae79-419d-bf86-c4ea993da6eb,ResourceVersion:5310854,Generation:1,CreationTimestamp:2020-10-22 19:16:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Oct 22 19:16:53.529: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-5269,SelfLink:/apis/apps/v1/namespaces/deployment-5269/replicasets/test-cleanup-deployment-55bbcbc84c,UID:8cfdafd8-2fc6-4f2a-a291-763b7bcb2b73,ResourceVersion:5310856,Generation:1,CreationTimestamp:2020-10-22 19:16:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 6fc8fb7b-ae79-419d-bf86-c4ea993da6eb 0xc002ef9567 0xc002ef9568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Oct 22 19:16:53.529: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Oct 22 19:16:53.530: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-5269,SelfLink:/apis/apps/v1/namespaces/deployment-5269/replicasets/test-cleanup-controller,UID:c9a37f55-3737-4bdc-90bf-0c868f41e463,ResourceVersion:5310855,Generation:1,CreationTimestamp:2020-10-22 19:16:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 6fc8fb7b-ae79-419d-bf86-c4ea993da6eb 0xc002ef93b7 0xc002ef93b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Oct 22 19:16:53.565: INFO: Pod "test-cleanup-controller-bnglm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-bnglm,GenerateName:test-cleanup-controller-,Namespace:deployment-5269,SelfLink:/api/v1/namespaces/deployment-5269/pods/test-cleanup-controller-bnglm,UID:3130109b-8503-4a99-b5c2-2a884368f08b,ResourceVersion:5310848,Generation:0,CreationTimestamp:2020-10-22 19:16:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller c9a37f55-3737-4bdc-90bf-0c868f41e463 0xc000ff42f7 0xc000ff42f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-h7w2w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h7w2w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-h7w2w true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ff4370} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000ff4390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:16:48 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:16:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:16:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:16:48 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.209,StartTime:2020-10-22 19:16:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-10-22 19:16:50 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://edf8dd46b7bcbbebf8ab419e61e839b3e15bcf46b3036716377f11bd78b95ec7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 19:16:53.566: INFO: Pod "test-cleanup-deployment-55bbcbc84c-sbpvk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-sbpvk,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-5269,SelfLink:/api/v1/namespaces/deployment-5269/pods/test-cleanup-deployment-55bbcbc84c-sbpvk,UID:2a8d12de-1675-4ebd-b4b3-e435c51bdb54,ResourceVersion:5310862,Generation:0,CreationTimestamp:2020-10-22 19:16:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 8cfdafd8-2fc6-4f2a-a291-763b7bcb2b73 0xc000ff4477 0xc000ff4478}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-h7w2w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h7w2w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-h7w2w true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000ff44f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000ff4510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:16:53 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:16:53.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5269" for this suite.
Oct 22 19:16:59.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:16:59.703: INFO: namespace deployment-5269 deletion completed in 6.093656809s

• [SLOW TEST:11.303 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
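
The ReplicaSet dumps above show the new ReplicaSet carrying the deployment.kubernetes.io/revision annotation and an OwnerReference back to test-cleanup-deployment; how many superseded ReplicaSets a Deployment keeps around is governed by .spec.revisionHistoryLimit. As a rough sketch (not the suite's exact fixture; names and image are illustrative), a Deployment that retains no old ReplicaSets would be built against the 1.15-era apps/v1 API like this:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// RevisionHistoryLimit=0: superseded ReplicaSets are garbage-collected
	// as soon as a new revision finishes rolling out.
	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "cleanup-demo", Namespace: "default"},
		Spec: appsv1.DeploymentSpec{
			Replicas:             int32Ptr(1),
			RevisionHistoryLimit: int32Ptr(0),
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"name": "cleanup-pod"},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: map[string]string{"name": "cleanup-pod"},
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}
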
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:16:59.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Oct 22 19:16:59.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4038'
Oct 22 19:17:02.800: INFO: stderr: ""
Oct 22 19:17:02.800: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Oct 22 19:17:02.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4038'
Oct 22 19:17:02.906: INFO: stderr: ""
Oct 22 19:17:02.906: INFO: stdout: "update-demo-nautilus-2hgwd update-demo-nautilus-d4wbd "
Oct 22 19:17:02.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2hgwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4038'
Oct 22 19:17:02.987: INFO: stderr: ""
Oct 22 19:17:02.987: INFO: stdout: ""
Oct 22 19:17:02.987: INFO: update-demo-nautilus-2hgwd is created but not running
Oct 22 19:17:07.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4038'
Oct 22 19:17:08.086: INFO: stderr: ""
Oct 22 19:17:08.086: INFO: stdout: "update-demo-nautilus-2hgwd update-demo-nautilus-d4wbd "
Oct 22 19:17:08.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2hgwd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4038'
Oct 22 19:17:08.175: INFO: stderr: ""
Oct 22 19:17:08.175: INFO: stdout: "true"
Oct 22 19:17:08.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2hgwd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4038'
Oct 22 19:17:08.259: INFO: stderr: ""
Oct 22 19:17:08.259: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Oct 22 19:17:08.259: INFO: validating pod update-demo-nautilus-2hgwd
Oct 22 19:17:08.263: INFO: got data: {
  "image": "nautilus.jpg"
}

Oct 22 19:17:08.263: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct 22 19:17:08.263: INFO: update-demo-nautilus-2hgwd is verified up and running
Oct 22 19:17:08.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d4wbd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4038'
Oct 22 19:17:08.356: INFO: stderr: ""
Oct 22 19:17:08.356: INFO: stdout: "true"
Oct 22 19:17:08.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d4wbd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4038'
Oct 22 19:17:08.449: INFO: stderr: ""
Oct 22 19:17:08.449: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Oct 22 19:17:08.449: INFO: validating pod update-demo-nautilus-d4wbd
Oct 22 19:17:08.452: INFO: got data: {
  "image": "nautilus.jpg"
}

Oct 22 19:17:08.452: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct 22 19:17:08.452: INFO: update-demo-nautilus-d4wbd is verified up and running
STEP: rolling-update to new replication controller
Oct 22 19:17:08.454: INFO: scanned /root for discovery docs: 
Oct 22 19:17:08.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-4038'
Oct 22 19:17:31.148: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Oct 22 19:17:31.148: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Oct 22 19:17:31.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4038'
Oct 22 19:17:31.244: INFO: stderr: ""
Oct 22 19:17:31.244: INFO: stdout: "update-demo-kitten-4bwm4 update-demo-kitten-thd2n "
Oct 22 19:17:31.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4bwm4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4038'
Oct 22 19:17:31.345: INFO: stderr: ""
Oct 22 19:17:31.345: INFO: stdout: "true"
Oct 22 19:17:31.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-4bwm4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4038'
Oct 22 19:17:31.438: INFO: stderr: ""
Oct 22 19:17:31.438: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Oct 22 19:17:31.438: INFO: validating pod update-demo-kitten-4bwm4
Oct 22 19:17:31.442: INFO: got data: {
  "image": "kitten.jpg"
}

Oct 22 19:17:31.442: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Oct 22 19:17:31.442: INFO: update-demo-kitten-4bwm4 is verified up and running
Oct 22 19:17:31.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-thd2n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4038'
Oct 22 19:17:31.541: INFO: stderr: ""
Oct 22 19:17:31.541: INFO: stdout: "true"
Oct 22 19:17:31.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-thd2n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4038'
Oct 22 19:17:31.634: INFO: stderr: ""
Oct 22 19:17:31.634: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Oct 22 19:17:31.634: INFO: validating pod update-demo-kitten-thd2n
Oct 22 19:17:31.638: INFO: got data: {
  "image": "kitten.jpg"
}

Oct 22 19:17:31.638: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Oct 22 19:17:31.638: INFO: update-demo-kitten-thd2n is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:17:31.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4038" for this suite.
Oct 22 19:17:55.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:17:55.721: INFO: namespace kubectl-4038 deletion completed in 24.07890581s

• [SLOW TEST:56.018 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
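
The initial controller created above (replicationcontroller/update-demo-nautilus) is an ordinary ReplicationController selecting pods by name=update-demo; kubectl rolling-update then replaces it pod-by-pod with the kitten-image controller. A sketch of the kind of object the suite feeds to `kubectl create -f -`, built against the 1.15-era core/v1 API (label and image values mirror the log, but this is not the suite's exact manifest):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Two nautilus pods labelled name=update-demo, the label the log's
	// `kubectl get pods -l name=update-demo` queries select on.
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "update-demo-nautilus"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(2),
			Selector: map[string]string{"name": "update-demo"},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: map[string]string{"name": "update-demo"},
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "update-demo",
						Image: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}
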
------------------------------
SSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:17:55.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-ac8220d7-3cd4-4e96-963b-aa96055c77a2 in namespace container-probe-8918
Oct 22 19:17:59.850: INFO: Started pod busybox-ac8220d7-3cd4-4e96-963b-aa96055c77a2 in namespace container-probe-8918
STEP: checking the pod's current state and verifying that restartCount is present
Oct 22 19:17:59.853: INFO: Initial restart count of pod busybox-ac8220d7-3cd4-4e96-963b-aa96055c77a2 is 0
Oct 22 19:18:47.973: INFO: Restart count of pod container-probe-8918/busybox-ac8220d7-3cd4-4e96-963b-aa96055c77a2 is now 1 (48.11966723s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:18:48.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8918" for this suite.
Oct 22 19:18:54.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:18:54.126: INFO: namespace container-probe-8918 deletion completed in 6.09287424s

• [SLOW TEST:58.405 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
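
This spec relies on an exec liveness probe (cat /tmp/health) that starts failing once the probed file disappears, after which the kubelet restarts the container (restart count 0 -> 1 above). A minimal sketch of such a pod against the 1.15-era core/v1 API, where Probe embeds Handler; the busybox image and shell command here are illustrative, not the suite's exact fixture:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The container creates /tmp/health, removes it after a while, and the
	// exec probe ("cat /tmp/health") then fails, triggering a restart.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
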
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:18:54.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct 22 19:18:54.177: INFO: Waiting up to 5m0s for pod "pod-552dd8e0-d8c1-430a-99f7-7327ededcdd5" in namespace "emptydir-4524" to be "success or failure"
Oct 22 19:18:54.206: INFO: Pod "pod-552dd8e0-d8c1-430a-99f7-7327ededcdd5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.167668ms
Oct 22 19:18:56.209: INFO: Pod "pod-552dd8e0-d8c1-430a-99f7-7327ededcdd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031997379s
Oct 22 19:18:58.214: INFO: Pod "pod-552dd8e0-d8c1-430a-99f7-7327ededcdd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036392272s
STEP: Saw pod success
Oct 22 19:18:58.214: INFO: Pod "pod-552dd8e0-d8c1-430a-99f7-7327ededcdd5" satisfied condition "success or failure"
Oct 22 19:18:58.216: INFO: Trying to get logs from node iruya-worker2 pod pod-552dd8e0-d8c1-430a-99f7-7327ededcdd5 container test-container: 
STEP: delete the pod
Oct 22 19:18:58.311: INFO: Waiting for pod pod-552dd8e0-d8c1-430a-99f7-7327ededcdd5 to disappear
Oct 22 19:18:58.359: INFO: Pod pod-552dd8e0-d8c1-430a-99f7-7327ededcdd5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:18:58.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4524" for this suite.
Oct 22 19:19:04.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:19:04.458: INFO: namespace emptydir-4524 deletion completed in 6.090215628s

• [SLOW TEST:10.331 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
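
The (root,0644,tmpfs) variant mounts an emptyDir backed by node memory (Medium: Memory, i.e. tmpfs) and has the test container write and inspect a file with mode 0644. A sketch of the volume wiring against the 1.15-era core/v1 API (image, mount path and command are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// An emptyDir with Medium=Memory is backed by tmpfs on the node.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "touch /mnt/scratch/file && chmod 0644 /mnt/scratch/file && ls -l /mnt/scratch"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "scratch",
					MountPath: "/mnt/scratch",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
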
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:19:04.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Oct 22 19:19:04.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-31'
Oct 22 19:19:04.802: INFO: stderr: ""
Oct 22 19:19:04.803: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Oct 22 19:19:04.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-31'
Oct 22 19:19:04.924: INFO: stderr: ""
Oct 22 19:19:04.924: INFO: stdout: "update-demo-nautilus-cbbg7 update-demo-nautilus-htmw6 "
Oct 22 19:19:04.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cbbg7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-31'
Oct 22 19:19:05.016: INFO: stderr: ""
Oct 22 19:19:05.016: INFO: stdout: ""
Oct 22 19:19:05.016: INFO: update-demo-nautilus-cbbg7 is created but not running
Oct 22 19:19:10.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-31'
Oct 22 19:19:10.121: INFO: stderr: ""
Oct 22 19:19:10.121: INFO: stdout: "update-demo-nautilus-cbbg7 update-demo-nautilus-htmw6 "
Oct 22 19:19:10.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cbbg7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-31'
Oct 22 19:19:10.206: INFO: stderr: ""
Oct 22 19:19:10.207: INFO: stdout: "true"
Oct 22 19:19:10.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cbbg7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-31'
Oct 22 19:19:10.298: INFO: stderr: ""
Oct 22 19:19:10.298: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Oct 22 19:19:10.298: INFO: validating pod update-demo-nautilus-cbbg7
Oct 22 19:19:10.302: INFO: got data: {
  "image": "nautilus.jpg"
}

Oct 22 19:19:10.302: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct 22 19:19:10.302: INFO: update-demo-nautilus-cbbg7 is verified up and running
Oct 22 19:19:10.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-htmw6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-31'
Oct 22 19:19:10.385: INFO: stderr: ""
Oct 22 19:19:10.386: INFO: stdout: "true"
Oct 22 19:19:10.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-htmw6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-31'
Oct 22 19:19:10.470: INFO: stderr: ""
Oct 22 19:19:10.470: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Oct 22 19:19:10.470: INFO: validating pod update-demo-nautilus-htmw6
Oct 22 19:19:10.474: INFO: got data: {
  "image": "nautilus.jpg"
}

Oct 22 19:19:10.474: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct 22 19:19:10.474: INFO: update-demo-nautilus-htmw6 is verified up and running
STEP: using delete to clean up resources
Oct 22 19:19:10.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-31'
Oct 22 19:19:10.577: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 22 19:19:10.577: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Oct 22 19:19:10.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-31'
Oct 22 19:19:10.982: INFO: stderr: "No resources found.\n"
Oct 22 19:19:10.982: INFO: stdout: ""
Oct 22 19:19:10.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-31 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Oct 22 19:19:11.497: INFO: stderr: ""
Oct 22 19:19:11.497: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:19:11.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-31" for this suite.
Oct 22 19:19:17.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:19:17.967: INFO: namespace kubectl-31 deletion completed in 6.105323485s

• [SLOW TEST:13.509 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
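
The repeated `kubectl get pods ... -o template` calls above poll until the update-demo container reports a running state. Expressed client-side over a core/v1 PodStatus instead of kubectl's template engine, the same check looks roughly like this (the helper is illustrative, not part of the suite):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// containerRunning reports whether the named container in the pod's status
// has a Running state, mirroring the log's
// `{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}` template.
func containerRunning(pod *corev1.Pod, name string) bool {
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.Name == name && cs.State.Running != nil {
			return true
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			ContainerStatuses: []corev1.ContainerStatus{{
				Name:  "update-demo",
				State: corev1.ContainerState{Running: &corev1.ContainerStateRunning{}},
			}},
		},
	}
	fmt.Println(containerRunning(pod, "update-demo")) // prints: true
}
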
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:19:17.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-f643dd3c-86fb-4d41-9077-02ae04d60000
STEP: Creating a pod to test consume secrets
Oct 22 19:19:18.060: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-539fc691-dc06-4382-a4b5-e49032b21f68" in namespace "projected-6651" to be "success or failure"
Oct 22 19:19:18.069: INFO: Pod "pod-projected-secrets-539fc691-dc06-4382-a4b5-e49032b21f68": Phase="Pending", Reason="", readiness=false. Elapsed: 8.843001ms
Oct 22 19:19:20.073: INFO: Pod "pod-projected-secrets-539fc691-dc06-4382-a4b5-e49032b21f68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013239389s
Oct 22 19:19:22.077: INFO: Pod "pod-projected-secrets-539fc691-dc06-4382-a4b5-e49032b21f68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016908721s
STEP: Saw pod success
Oct 22 19:19:22.077: INFO: Pod "pod-projected-secrets-539fc691-dc06-4382-a4b5-e49032b21f68" satisfied condition "success or failure"
Oct 22 19:19:22.079: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-539fc691-dc06-4382-a4b5-e49032b21f68 container projected-secret-volume-test: 
STEP: delete the pod
Oct 22 19:19:22.127: INFO: Waiting for pod pod-projected-secrets-539fc691-dc06-4382-a4b5-e49032b21f68 to disappear
Oct 22 19:19:22.154: INFO: Pod pod-projected-secrets-539fc691-dc06-4382-a4b5-e49032b21f68 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:19:22.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6651" for this suite.
Oct 22 19:19:28.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:19:28.254: INFO: namespace projected-6651 deletion completed in 6.096519946s

• [SLOW TEST:10.287 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
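
Here the secret is exposed through a projected volume with a non-default file mode, and the pod runs as a non-root user with an fsGroup so the mounted files remain readable. A sketch of that wiring against the 1.15-era core/v1 API (names, UID/GID and mode values are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }
func int64Ptr(i int64) *int64 { return &i }
func boolPtr(b bool) *bool    { return &b }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser:    int64Ptr(1000),
				RunAsNonRoot: boolPtr(true),
				FSGroup:      int64Ptr(1000),
			},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						// 0440: readable only by the owner and the fsGroup.
						DefaultMode: int32Ptr(0440),
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected",
					ReadOnly:  true,
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
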
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:19:28.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-7683/configmap-test-668a99be-8bca-44ea-ad52-85916dceba65
STEP: Creating a pod to test consume configMaps
Oct 22 19:19:28.347: INFO: Waiting up to 5m0s for pod "pod-configmaps-a34ab876-c98c-40d7-90aa-a299ce13ea02" in namespace "configmap-7683" to be "success or failure"
Oct 22 19:19:28.357: INFO: Pod "pod-configmaps-a34ab876-c98c-40d7-90aa-a299ce13ea02": Phase="Pending", Reason="", readiness=false. Elapsed: 10.271121ms
Oct 22 19:19:30.361: INFO: Pod "pod-configmaps-a34ab876-c98c-40d7-90aa-a299ce13ea02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013977489s
Oct 22 19:19:32.365: INFO: Pod "pod-configmaps-a34ab876-c98c-40d7-90aa-a299ce13ea02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01794895s
STEP: Saw pod success
Oct 22 19:19:32.365: INFO: Pod "pod-configmaps-a34ab876-c98c-40d7-90aa-a299ce13ea02" satisfied condition "success or failure"
Oct 22 19:19:32.368: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-a34ab876-c98c-40d7-90aa-a299ce13ea02 container env-test: 
STEP: delete the pod
Oct 22 19:19:32.618: INFO: Waiting for pod pod-configmaps-a34ab876-c98c-40d7-90aa-a299ce13ea02 to disappear
Oct 22 19:19:32.655: INFO: Pod pod-configmaps-a34ab876-c98c-40d7-90aa-a299ce13ea02 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:19:32.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7683" for this suite.
Oct 22 19:19:40.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:19:40.837: INFO: namespace configmap-7683 deletion completed in 8.178315029s

• [SLOW TEST:12.582 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
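
The env-test container above receives its value through an environment variable whose ValueFrom points at a ConfigMap key. A minimal sketch against the 1.15-era core/v1 API (ConfigMap name, key and command are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "env | grep CONFIG_DATA"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
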
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:19:40.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1022 19:19:52.252140       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Oct 22 19:19:52.252: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:19:52.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4886" for this suite.
Oct 22 19:20:02.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:20:02.574: INFO: namespace gc-4886 deletion completed in 10.160415836s

• [SLOW TEST:21.736 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
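
The step "set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well" works by giving those pods two entries in metadata.ownerReferences; the garbage collector does not remove a dependent while it still has another valid owner. A sketch of such dual ownership (names are taken from the log, UIDs are placeholders):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	// A pod owned by two ReplicationControllers: the GC only deletes it once
	// all of its owners are gone (or it is explicitly orphaned).
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "simpletest-pod",
			OwnerReferences: []metav1.OwnerReference{
				{
					APIVersion:         "v1",
					Kind:               "ReplicationController",
					Name:               "simpletest-rc-to-be-deleted",
					UID:                types.UID("00000000-0000-0000-0000-000000000001"), // placeholder
					Controller:         boolPtr(true),
					BlockOwnerDeletion: boolPtr(true),
				},
				{
					APIVersion: "v1",
					Kind:       "ReplicationController",
					Name:       "simpletest-rc-to-stay",
					UID:        types.UID("00000000-0000-0000-0000-000000000002"), // placeholder
				},
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "nginx", Image: "docker.io/library/nginx:1.14-alpine"}},
		},
	}
	out, _ := json.MarshalIndent(pod.ObjectMeta.OwnerReferences, "", "  ")
	fmt.Println(string(out))
}
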
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:20:02.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:20:28.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8543" for this suite.
Oct 22 19:20:34.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:20:34.899: INFO: namespace namespaces-8543 deletion completed in 6.084052798s
STEP: Destroying namespace "nsdeletetest-9398" for this suite.
Oct 22 19:20:34.902: INFO: Namespace nsdeletetest-9398 was already deleted
STEP: Destroying namespace "nsdeletetest-9459" for this suite.
Oct 22 19:20:40.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:20:40.996: INFO: namespace nsdeletetest-9459 deletion completed in 6.094413598s

• [SLOW TEST:38.421 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
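
This spec deletes a namespace that contains a running pod, waits for the namespace controller to finish, then recreates a namespace of the same name and verifies it starts empty. Driven through client-go with the context-free method signatures contemporary with this 1.15 suite (newer client-go releases add a context.Context argument and options structs), the flow looks roughly like the following; the kubeconfig source and namespace name are illustrative:

package main

import (
	"fmt"
	"log"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig, as the suite does with /root/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	const ns = "nsdeletetest-demo"

	// Delete the namespace; the namespace controller removes every pod in it.
	if err := client.CoreV1().Namespaces().Delete(ns, &metav1.DeleteOptions{}); err != nil {
		log.Fatal(err)
	}

	// Wait for the deletion to complete (the log above shows this taking a few seconds).
	for i := 0; i < 60; i++ {
		if _, err := client.CoreV1().Namespaces().Get(ns, metav1.GetOptions{}); errors.IsNotFound(err) {
			break
		}
		time.Sleep(2 * time.Second)
	}

	// Recreate the namespace and verify it comes back with no pods.
	if _, err := client.CoreV1().Namespaces().Create(&corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{Name: ns},
	}); err != nil {
		log.Fatal(err)
	}
	pods, err := client.CoreV1().Pods(ns).List(metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pods in recreated namespace %s: %d\n", ns, len(pods.Items))
}
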
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:20:40.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-105b31fe-f911-4f64-8f7a-7cfc58f6ddc2 in namespace container-probe-5158
Oct 22 19:20:45.105: INFO: Started pod busybox-105b31fe-f911-4f64-8f7a-7cfc58f6ddc2 in namespace container-probe-5158
STEP: checking the pod's current state and verifying that restartCount is present
Oct 22 19:20:45.108: INFO: Initial restart count of pod busybox-105b31fe-f911-4f64-8f7a-7cfc58f6ddc2 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:24:45.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5158" for this suite.
Oct 22 19:24:51.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:24:51.636: INFO: namespace container-probe-5158 deletion completed in 6.181247261s

• [SLOW TEST:250.639 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:24:51.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:24:51.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3082" for this suite.
Oct 22 19:24:57.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:24:57.908: INFO: namespace kubelet-test-3082 deletion completed in 6.090779978s

• [SLOW TEST:6.272 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:24:57.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Oct 22 19:25:06.059: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Oct 22 19:25:06.064: INFO: Pod pod-with-prestop-http-hook still exists
Oct 22 19:25:08.064: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Oct 22 19:25:08.069: INFO: Pod pod-with-prestop-http-hook still exists
Oct 22 19:25:10.064: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Oct 22 19:25:10.068: INFO: Pod pod-with-prestop-http-hook still exists
Oct 22 19:25:12.064: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Oct 22 19:25:12.068: INFO: Pod pod-with-prestop-http-hook still exists
Oct 22 19:25:14.064: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Oct 22 19:25:14.069: INFO: Pod pod-with-prestop-http-hook still exists
Oct 22 19:25:16.064: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Oct 22 19:25:16.068: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:25:16.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2255" for this suite.
Oct 22 19:25:38.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:25:38.181: INFO: namespace container-lifecycle-hook-2255 deletion completed in 22.099312151s

• [SLOW TEST:40.273 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
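
The pod deleted above carries a preStop HTTP hook: on deletion the kubelet issues the hook's HTTP GET (served here by the separately created handler pod) before stopping the container, which is why the log polls "still exists" for several seconds before the pod is gone. A sketch of such a lifecycle hook against the 1.15-era core/v1 API, where hooks use the Handler type; the image, host, port and path below are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-http-hook",
				Image: "k8s.gcr.io/pause:3.1",
				Lifecycle: &corev1.Lifecycle{
					// Delivered by the kubelet before the container is stopped.
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=prestop",  // illustrative path
							Host: "10.244.2.1",         // illustrative handler-pod IP
							Port: intstr.FromInt(8080), // illustrative port
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
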
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:25:38.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-5350180a-5a02-41b0-ac79-9f35aed31b6e
STEP: Creating secret with name s-test-opt-upd-5522109f-6fb7-4b6d-8156-124a6466e85e
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-5350180a-5a02-41b0-ac79-9f35aed31b6e
STEP: Updating secret s-test-opt-upd-5522109f-6fb7-4b6d-8156-124a6466e85e
STEP: Creating secret with name s-test-opt-create-246fb509-c86e-4df7-affb-3883400ef8cd
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:25:48.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6741" for this suite.
Oct 22 19:26:10.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:26:10.484: INFO: namespace projected-6741 deletion completed in 22.087208942s

• [SLOW TEST:32.303 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:26:10.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-75/secret-test-483ed314-bd60-4bba-995e-3f2b93a71eec
STEP: Creating a pod to test consume secrets
Oct 22 19:26:10.575: INFO: Waiting up to 5m0s for pod "pod-configmaps-1fb6fd8b-8c62-42cd-8f6e-4bc87e009bf6" in namespace "secrets-75" to be "success or failure"
Oct 22 19:26:10.589: INFO: Pod "pod-configmaps-1fb6fd8b-8c62-42cd-8f6e-4bc87e009bf6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.506941ms
Oct 22 19:26:12.592: INFO: Pod "pod-configmaps-1fb6fd8b-8c62-42cd-8f6e-4bc87e009bf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01698127s
Oct 22 19:26:14.596: INFO: Pod "pod-configmaps-1fb6fd8b-8c62-42cd-8f6e-4bc87e009bf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020912764s
STEP: Saw pod success
Oct 22 19:26:14.596: INFO: Pod "pod-configmaps-1fb6fd8b-8c62-42cd-8f6e-4bc87e009bf6" satisfied condition "success or failure"
Oct 22 19:26:14.599: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-1fb6fd8b-8c62-42cd-8f6e-4bc87e009bf6 container env-test: 
STEP: delete the pod
Oct 22 19:26:14.630: INFO: Waiting for pod pod-configmaps-1fb6fd8b-8c62-42cd-8f6e-4bc87e009bf6 to disappear
Oct 22 19:26:14.643: INFO: Pod pod-configmaps-1fb6fd8b-8c62-42cd-8f6e-4bc87e009bf6 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:26:14.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-75" for this suite.
Oct 22 19:26:20.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:26:20.730: INFO: namespace secrets-75 deletion completed in 6.084354724s

• [SLOW TEST:10.246 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
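The Secrets spec above checks that secret data is visible in a container's environment. One way to wire that up with the k8s.io/api types is an EnvFrom source referencing the whole Secret; this is a hedged sketch (secret name, key, and value are illustrative), not necessarily the exact mechanism the e2e test uses:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test", Namespace: "secrets-75"}, // illustrative name
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}

	container := corev1.Container{
		Name:    "env-test",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		// Expose every key of the Secret as an environment variable.
		EnvFrom: []corev1.EnvFromSource{{
			SecretRef: &corev1.SecretEnvSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
			},
		}},
	}
	fmt.Println(container.Name)
}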
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:26:20.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1022 19:26:21.876438       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Oct 22 19:26:21.876: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:26:21.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3827" for this suite.
Oct 22 19:26:28.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:26:28.101: INFO: namespace gc-3827 deletion completed in 6.222460289s

• [SLOW TEST:7.371 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
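Deleting the Deployment "when not orphaning" means the delete carries a propagation policy that lets the garbage collector remove the dependent ReplicaSet and its Pods, which is what the "expected 0 rs / expected 0 pods" polling above verifies. A minimal sketch of such DeleteOptions (the exact Delete call signature depends on the client-go version in use):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Background (or Foreground) propagation lets the garbage collector
	// delete the Deployment's dependents instead of orphaning them.
	policy := metav1.DeletePropagationBackground
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}

	// opts is passed to the Deployments client's Delete call.
	fmt.Println(*opts.PropagationPolicy)
}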
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:26:28.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Oct 22 19:26:28.248: INFO: Waiting up to 5m0s for pod "client-containers-2fd51f04-fa86-4727-9f67-8fca7f762237" in namespace "containers-9110" to be "success or failure"
Oct 22 19:26:28.301: INFO: Pod "client-containers-2fd51f04-fa86-4727-9f67-8fca7f762237": Phase="Pending", Reason="", readiness=false. Elapsed: 53.532818ms
Oct 22 19:26:30.313: INFO: Pod "client-containers-2fd51f04-fa86-4727-9f67-8fca7f762237": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065275336s
Oct 22 19:26:32.373: INFO: Pod "client-containers-2fd51f04-fa86-4727-9f67-8fca7f762237": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.125395242s
STEP: Saw pod success
Oct 22 19:26:32.373: INFO: Pod "client-containers-2fd51f04-fa86-4727-9f67-8fca7f762237" satisfied condition "success or failure"
Oct 22 19:26:32.376: INFO: Trying to get logs from node iruya-worker2 pod client-containers-2fd51f04-fa86-4727-9f67-8fca7f762237 container test-container: 
STEP: delete the pod
Oct 22 19:26:32.421: INFO: Waiting for pod client-containers-2fd51f04-fa86-4727-9f67-8fca7f762237 to disappear
Oct 22 19:26:32.443: INFO: Pod client-containers-2fd51f04-fa86-4727-9f67-8fca7f762237 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:26:32.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9110" for this suite.
Oct 22 19:26:38.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:26:38.566: INFO: namespace containers-9110 deletion completed in 6.119769777s

• [SLOW TEST:10.464 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
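The "override all" pod above sets both Command and Args on the container, which replace the image's ENTRYPOINT and CMD respectively. A minimal sketch, assuming core/v1 types (names, image, and the echoed string are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-override-all"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Command replaces the image's ENTRYPOINT, Args replaces its CMD.
				Command: []string{"/bin/sh"},
				Args:    []string{"-c", "echo override all"},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Command, pod.Spec.Containers[0].Args)
}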
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:26:38.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Oct 22 19:26:38.673: INFO: Waiting up to 5m0s for pod "downwardapi-volume-20fa8630-4e8b-43e1-9401-764a6e8e91d4" in namespace "projected-1876" to be "success or failure"
Oct 22 19:26:38.689: INFO: Pod "downwardapi-volume-20fa8630-4e8b-43e1-9401-764a6e8e91d4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.244422ms
Oct 22 19:26:40.810: INFO: Pod "downwardapi-volume-20fa8630-4e8b-43e1-9401-764a6e8e91d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137670813s
Oct 22 19:26:42.815: INFO: Pod "downwardapi-volume-20fa8630-4e8b-43e1-9401-764a6e8e91d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.142139842s
STEP: Saw pod success
Oct 22 19:26:42.815: INFO: Pod "downwardapi-volume-20fa8630-4e8b-43e1-9401-764a6e8e91d4" satisfied condition "success or failure"
Oct 22 19:26:42.818: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-20fa8630-4e8b-43e1-9401-764a6e8e91d4 container client-container: 
STEP: delete the pod
Oct 22 19:26:42.836: INFO: Waiting for pod downwardapi-volume-20fa8630-4e8b-43e1-9401-764a6e8e91d4 to disappear
Oct 22 19:26:42.860: INFO: Pod downwardapi-volume-20fa8630-4e8b-43e1-9401-764a6e8e91d4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:26:42.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1876" for this suite.
Oct 22 19:26:49.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:26:49.436: INFO: namespace projected-1876 deletion completed in 6.318309863s

• [SLOW TEST:10.870 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
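The downward API volume spec above exposes the container's own CPU limit as a file in the mounted volume via a resourceFieldRef. A hedged sketch of such a pod (names, mount path, and the 1250m limit are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-cpu-limit"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							// Exposes the container's CPU limit as a file in the volume.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}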
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:26:49.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-4af06e28-95bb-4f5e-9b5a-17a38939792b
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:26:49.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2158" for this suite.
Oct 22 19:26:55.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:26:55.608: INFO: namespace secrets-2158 deletion completed in 6.112195304s

• [SLOW TEST:6.172 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
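The empty-secret-key spec relies on API-server validation rejecting a Secret whose data map contains the empty string as a key. A sketch of such an invalid object (the name is illustrative); attempting to create it is expected to fail:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The empty map key makes this Secret invalid; creating it should return
	// a validation error, which is exactly what the spec expects.
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-test"}, // illustrative name
		Data:       map[string][]byte{"": []byte("value-1")},
	}
	fmt.Println(len(secret.Data))
}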
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:26:55.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Oct 22 19:26:55.684: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cf32395d-fec4-46fa-990b-7ebf069be2ef" in namespace "projected-5717" to be "success or failure"
Oct 22 19:26:55.698: INFO: Pod "downwardapi-volume-cf32395d-fec4-46fa-990b-7ebf069be2ef": Phase="Pending", Reason="", readiness=false. Elapsed: 14.251727ms
Oct 22 19:26:57.702: INFO: Pod "downwardapi-volume-cf32395d-fec4-46fa-990b-7ebf069be2ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017865495s
Oct 22 19:26:59.751: INFO: Pod "downwardapi-volume-cf32395d-fec4-46fa-990b-7ebf069be2ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066927437s
Oct 22 19:27:01.755: INFO: Pod "downwardapi-volume-cf32395d-fec4-46fa-990b-7ebf069be2ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.07103426s
STEP: Saw pod success
Oct 22 19:27:01.755: INFO: Pod "downwardapi-volume-cf32395d-fec4-46fa-990b-7ebf069be2ef" satisfied condition "success or failure"
Oct 22 19:27:01.758: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-cf32395d-fec4-46fa-990b-7ebf069be2ef container client-container: 
STEP: delete the pod
Oct 22 19:27:01.794: INFO: Waiting for pod downwardapi-volume-cf32395d-fec4-46fa-990b-7ebf069be2ef to disappear
Oct 22 19:27:01.797: INFO: Pod downwardapi-volume-cf32395d-fec4-46fa-990b-7ebf069be2ef no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:27:01.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5717" for this suite.
Oct 22 19:27:07.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:27:07.889: INFO: namespace projected-5717 deletion completed in 6.08840219s

• [SLOW TEST:12.281 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:27:07.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9697
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Oct 22 19:27:08.002: INFO: Found 0 stateful pods, waiting for 3
Oct 22 19:27:18.007: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 22 19:27:18.007: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 22 19:27:18.007: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 22 19:27:28.008: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 22 19:27:28.008: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 22 19:27:28.008: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Oct 22 19:27:28.036: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Oct 22 19:27:38.078: INFO: Updating stateful set ss2
Oct 22 19:27:38.088: INFO: Waiting for Pod statefulset-9697/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Oct 22 19:27:48.264: INFO: Found 2 stateful pods, waiting for 3
Oct 22 19:27:58.268: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 22 19:27:58.268: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 22 19:27:58.268: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Oct 22 19:27:58.291: INFO: Updating stateful set ss2
Oct 22 19:27:58.344: INFO: Waiting for Pod statefulset-9697/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Oct 22 19:28:08.404: INFO: Updating stateful set ss2
Oct 22 19:28:08.418: INFO: Waiting for StatefulSet statefulset-9697/ss2 to complete update
Oct 22 19:28:08.418: INFO: Waiting for Pod statefulset-9697/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Oct 22 19:28:18.426: INFO: Waiting for StatefulSet statefulset-9697/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Oct 22 19:28:28.426: INFO: Deleting all statefulset in ns statefulset-9697
Oct 22 19:28:28.430: INFO: Scaling statefulset ss2 to 0
Oct 22 19:28:48.450: INFO: Waiting for statefulset status.replicas updated to 0
Oct 22 19:28:48.453: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:28:48.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9697" for this suite.
Oct 22 19:28:54.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:28:54.592: INFO: namespace statefulset-9697 deletion completed in 6.102786109s

• [SLOW TEST:106.702 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
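The canary and phased rolling updates exercised above are driven by the StatefulSet's RollingUpdate partition: only pods whose ordinal is greater than or equal to the partition move to the new revision, so lowering the partition step by step first canaries the change and then rolls it out in phases. A sketch of the strategy stanza, assuming apps/v1 types (the partition value is illustrative):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

func main() {
	// With partition=2 and three replicas, only ss2-2 is updated at first;
	// dropping the partition to 1 and then 0 updates ss2-1 and ss2-0.
	partition := int32(2)
	strategy := appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			Partition: &partition,
		},
	}
	fmt.Println(*strategy.RollingUpdate.Partition)
}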
SSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:28:54.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:28:54.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5591" for this suite.
Oct 22 19:29:00.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:29:00.778: INFO: namespace services-5591 deletion completed in 6.088592583s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.185 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:29:00.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1022 19:29:31.382044       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Oct 22 19:29:31.382: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:29:31.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3277" for this suite.
Oct 22 19:29:37.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:29:37.466: INFO: namespace gc-3277 deletion completed in 6.080630721s

• [SLOW TEST:36.688 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
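Here the delete uses deleteOptions.PropagationPolicy=Orphan, so the ReplicaSet is detached from the Deployment rather than collected; the 30-second wait above checks that it survives. The options mirror the earlier sketch, with the orphan policy:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Orphan propagation removes owner references from dependents instead of
	// deleting them, so the ReplicaSet created by the Deployment remains.
	policy := metav1.DeletePropagationOrphan
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}
	fmt.Println(*opts.PropagationPolicy)
}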
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:29:37.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-5589e048-f54b-49f0-9808-1f8df268a82e
STEP: Creating a pod to test consume configMaps
Oct 22 19:29:37.775: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-665e131d-f2df-4d71-ac2a-667e19b67f91" in namespace "projected-1514" to be "success or failure"
Oct 22 19:29:37.836: INFO: Pod "pod-projected-configmaps-665e131d-f2df-4d71-ac2a-667e19b67f91": Phase="Pending", Reason="", readiness=false. Elapsed: 60.030384ms
Oct 22 19:29:39.839: INFO: Pod "pod-projected-configmaps-665e131d-f2df-4d71-ac2a-667e19b67f91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063674843s
Oct 22 19:29:41.843: INFO: Pod "pod-projected-configmaps-665e131d-f2df-4d71-ac2a-667e19b67f91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067884829s
STEP: Saw pod success
Oct 22 19:29:41.844: INFO: Pod "pod-projected-configmaps-665e131d-f2df-4d71-ac2a-667e19b67f91" satisfied condition "success or failure"
Oct 22 19:29:41.846: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-665e131d-f2df-4d71-ac2a-667e19b67f91 container projected-configmap-volume-test: 
STEP: delete the pod
Oct 22 19:29:41.946: INFO: Waiting for pod pod-projected-configmaps-665e131d-f2df-4d71-ac2a-667e19b67f91 to disappear
Oct 22 19:29:41.968: INFO: Pod pod-projected-configmaps-665e131d-f2df-4d71-ac2a-667e19b67f91 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:29:41.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1514" for this suite.
Oct 22 19:29:47.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:29:48.048: INFO: namespace projected-1514 deletion completed in 6.077141751s

• [SLOW TEST:10.582 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
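The "with mappings" variant projects a ConfigMap into the volume while remapping a key to a custom path, so the container reads the value at a chosen file name instead of a file named after the key. A hedged sketch of such a volume source (the ConfigMap name, key, and path are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volume := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"}, // illustrative name
						// The key "data-1" appears in the volume at path/to/data-2.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
					},
				}},
			},
		},
	}
	fmt.Println(volume.Name)
}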
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:29:48.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-8004
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8004 to expose endpoints map[]
Oct 22 19:29:48.192: INFO: Get endpoints failed (25.322376ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Oct 22 19:29:49.196: INFO: successfully validated that service multi-endpoint-test in namespace services-8004 exposes endpoints map[] (1.029227901s elapsed)
STEP: Creating pod pod1 in namespace services-8004
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8004 to expose endpoints map[pod1:[100]]
Oct 22 19:29:53.264: INFO: successfully validated that service multi-endpoint-test in namespace services-8004 exposes endpoints map[pod1:[100]] (4.062106127s elapsed)
STEP: Creating pod pod2 in namespace services-8004
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8004 to expose endpoints map[pod1:[100] pod2:[101]]
Oct 22 19:29:58.314: INFO: successfully validated that service multi-endpoint-test in namespace services-8004 exposes endpoints map[pod1:[100] pod2:[101]] (5.046076207s elapsed)
STEP: Deleting pod pod1 in namespace services-8004
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8004 to expose endpoints map[pod2:[101]]
Oct 22 19:29:59.341: INFO: successfully validated that service multi-endpoint-test in namespace services-8004 exposes endpoints map[pod2:[101]] (1.023209904s elapsed)
STEP: Deleting pod pod2 in namespace services-8004
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8004 to expose endpoints map[]
Oct 22 19:30:00.518: INFO: successfully validated that service multi-endpoint-test in namespace services-8004 exposes endpoints map[] (1.172075989s elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:30:00.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8004" for this suite.
Oct 22 19:30:06.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:30:06.865: INFO: namespace services-8004 deletion completed in 6.086648697s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:18.816 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
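The multiport Services spec above creates a service with two named ports whose target ports (100 and 101, matching the endpoints maps in the log) land on different container ports of the backing pods. A minimal sketch of such a Service, assuming core/v1 types (the selector labels are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"selector-key": "selector-value"}, // illustrative selector
			Ports: []corev1.ServicePort{
				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
			},
		},
	}
	fmt.Println(svc.Name, len(svc.Spec.Ports))
}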
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:30:06.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Oct 22 19:30:07.016: INFO: Waiting up to 5m0s for pod "pod-351f7dab-b480-4977-98f6-4de026b8c5be" in namespace "emptydir-6031" to be "success or failure"
Oct 22 19:30:07.019: INFO: Pod "pod-351f7dab-b480-4977-98f6-4de026b8c5be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.938714ms
Oct 22 19:30:09.023: INFO: Pod "pod-351f7dab-b480-4977-98f6-4de026b8c5be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00724171s
Oct 22 19:30:11.151: INFO: Pod "pod-351f7dab-b480-4977-98f6-4de026b8c5be": Phase="Running", Reason="", readiness=true. Elapsed: 4.135453827s
Oct 22 19:30:13.155: INFO: Pod "pod-351f7dab-b480-4977-98f6-4de026b8c5be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.139273183s
STEP: Saw pod success
Oct 22 19:30:13.155: INFO: Pod "pod-351f7dab-b480-4977-98f6-4de026b8c5be" satisfied condition "success or failure"
Oct 22 19:30:13.157: INFO: Trying to get logs from node iruya-worker pod pod-351f7dab-b480-4977-98f6-4de026b8c5be container test-container: 
STEP: delete the pod
Oct 22 19:30:13.176: INFO: Waiting for pod pod-351f7dab-b480-4977-98f6-4de026b8c5be to disappear
Oct 22 19:30:13.180: INFO: Pod pod-351f7dab-b480-4977-98f6-4de026b8c5be no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:30:13.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6031" for this suite.
Oct 22 19:30:19.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:30:19.270: INFO: namespace emptydir-6031 deletion completed in 6.086185883s

• [SLOW TEST:12.405 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
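"(root,0666,tmpfs)" means the emptyDir is backed by memory (tmpfs) and the test container creates a file in it with 0666 permissions, then verifies the mode and medium. The volume side of that, as a short sketch:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Medium "Memory" backs the emptyDir with tmpfs; the test container then
	// writes a 0666-mode file into the mounted directory and checks it.
	volume := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
	fmt.Println(volume.Name)
}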
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:30:19.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Oct 22 19:30:23.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-192803d8-3384-45fa-a0f0-05efc8187430 -c busybox-main-container --namespace=emptydir-5043 -- cat /usr/share/volumeshare/shareddata.txt'
Oct 22 19:30:28.034: INFO: stderr: "I1022 19:30:27.932251    1149 log.go:172] (0xc0009cc2c0) (0xc000620be0) Create stream\nI1022 19:30:27.932293    1149 log.go:172] (0xc0009cc2c0) (0xc000620be0) Stream added, broadcasting: 1\nI1022 19:30:27.934864    1149 log.go:172] (0xc0009cc2c0) Reply frame received for 1\nI1022 19:30:27.934918    1149 log.go:172] (0xc0009cc2c0) (0xc00041c000) Create stream\nI1022 19:30:27.934933    1149 log.go:172] (0xc0009cc2c0) (0xc00041c000) Stream added, broadcasting: 3\nI1022 19:30:27.936173    1149 log.go:172] (0xc0009cc2c0) Reply frame received for 3\nI1022 19:30:27.936254    1149 log.go:172] (0xc0009cc2c0) (0xc0004a4000) Create stream\nI1022 19:30:27.936279    1149 log.go:172] (0xc0009cc2c0) (0xc0004a4000) Stream added, broadcasting: 5\nI1022 19:30:27.937557    1149 log.go:172] (0xc0009cc2c0) Reply frame received for 5\nI1022 19:30:28.024826    1149 log.go:172] (0xc0009cc2c0) Data frame received for 5\nI1022 19:30:28.024981    1149 log.go:172] (0xc0004a4000) (5) Data frame handling\nI1022 19:30:28.025030    1149 log.go:172] (0xc0009cc2c0) Data frame received for 3\nI1022 19:30:28.025076    1149 log.go:172] (0xc00041c000) (3) Data frame handling\nI1022 19:30:28.025107    1149 log.go:172] (0xc00041c000) (3) Data frame sent\nI1022 19:30:28.025125    1149 log.go:172] (0xc0009cc2c0) Data frame received for 3\nI1022 19:30:28.025139    1149 log.go:172] (0xc00041c000) (3) Data frame handling\nI1022 19:30:28.026895    1149 log.go:172] (0xc0009cc2c0) Data frame received for 1\nI1022 19:30:28.026922    1149 log.go:172] (0xc000620be0) (1) Data frame handling\nI1022 19:30:28.026947    1149 log.go:172] (0xc000620be0) (1) Data frame sent\nI1022 19:30:28.026966    1149 log.go:172] (0xc0009cc2c0) (0xc000620be0) Stream removed, broadcasting: 1\nI1022 19:30:28.026991    1149 log.go:172] (0xc0009cc2c0) Go away received\nI1022 19:30:28.027576    1149 log.go:172] (0xc0009cc2c0) (0xc000620be0) Stream removed, broadcasting: 1\nI1022 19:30:28.027602    1149 log.go:172] (0xc0009cc2c0) (0xc00041c000) Stream removed, broadcasting: 3\nI1022 19:30:28.027615    1149 log.go:172] (0xc0009cc2c0) (0xc0004a4000) Stream removed, broadcasting: 5\n"
Oct 22 19:30:28.034: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:30:28.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5043" for this suite.
Oct 22 19:30:34.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:30:34.134: INFO: namespace emptydir-5043 deletion completed in 6.095664183s

• [SLOW TEST:14.864 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
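The shared-volume spec mounts a single emptyDir into two containers of the same pod, so a file written by one container is readable by the other; the kubectl exec in the log above cats /usr/share/volumeshare/shareddata.txt to prove it. A hedged sketch of such a pod (images and the writer command are illustrative; container names follow the log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mount := corev1.VolumeMount{Name: "shared-data", MountPath: "/usr/share/volumeshare"}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-sharedvolume"}, // illustrative name
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name:         "shared-data",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{
				{
					// Writes the file into the shared mount, then stays up.
					Name:  "busybox-sub-container",
					Image: "busybox",
					Command: []string{"sh", "-c",
						"echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"},
					VolumeMounts: []corev1.VolumeMount{mount},
				},
				{
					// Sees the same file through the shared emptyDir.
					Name:         "busybox-main-container",
					Image:        "busybox",
					Command:      []string{"sleep", "3600"},
					VolumeMounts: []corev1.VolumeMount{mount},
				},
			},
		},
	}
	fmt.Println(pod.Name)
}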
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:30:34.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2547
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 22 19:30:34.184: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Oct 22 19:31:02.300: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.193 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2547 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 22 19:31:02.300: INFO: >>> kubeConfig: /root/.kube/config
I1022 19:31:02.325882       6 log.go:172] (0xc001e5abb0) (0xc000eee960) Create stream
I1022 19:31:02.325913       6 log.go:172] (0xc001e5abb0) (0xc000eee960) Stream added, broadcasting: 1
I1022 19:31:02.328016       6 log.go:172] (0xc001e5abb0) Reply frame received for 1
I1022 19:31:02.328056       6 log.go:172] (0xc001e5abb0) (0xc000eeeaa0) Create stream
I1022 19:31:02.328066       6 log.go:172] (0xc001e5abb0) (0xc000eeeaa0) Stream added, broadcasting: 3
I1022 19:31:02.328953       6 log.go:172] (0xc001e5abb0) Reply frame received for 3
I1022 19:31:02.328987       6 log.go:172] (0xc001e5abb0) (0xc0029286e0) Create stream
I1022 19:31:02.328998       6 log.go:172] (0xc001e5abb0) (0xc0029286e0) Stream added, broadcasting: 5
I1022 19:31:02.329906       6 log.go:172] (0xc001e5abb0) Reply frame received for 5
I1022 19:31:03.436974       6 log.go:172] (0xc001e5abb0) Data frame received for 3
I1022 19:31:03.437010       6 log.go:172] (0xc000eeeaa0) (3) Data frame handling
I1022 19:31:03.437021       6 log.go:172] (0xc000eeeaa0) (3) Data frame sent
I1022 19:31:03.437028       6 log.go:172] (0xc001e5abb0) Data frame received for 3
I1022 19:31:03.437037       6 log.go:172] (0xc000eeeaa0) (3) Data frame handling
I1022 19:31:03.437374       6 log.go:172] (0xc001e5abb0) Data frame received for 5
I1022 19:31:03.437392       6 log.go:172] (0xc0029286e0) (5) Data frame handling
I1022 19:31:03.438955       6 log.go:172] (0xc001e5abb0) Data frame received for 1
I1022 19:31:03.438975       6 log.go:172] (0xc000eee960) (1) Data frame handling
I1022 19:31:03.438986       6 log.go:172] (0xc000eee960) (1) Data frame sent
I1022 19:31:03.438998       6 log.go:172] (0xc001e5abb0) (0xc000eee960) Stream removed, broadcasting: 1
I1022 19:31:03.439011       6 log.go:172] (0xc001e5abb0) Go away received
I1022 19:31:03.439239       6 log.go:172] (0xc001e5abb0) (0xc000eee960) Stream removed, broadcasting: 1
I1022 19:31:03.439250       6 log.go:172] (0xc001e5abb0) (0xc000eeeaa0) Stream removed, broadcasting: 3
I1022 19:31:03.439255       6 log.go:172] (0xc001e5abb0) (0xc0029286e0) Stream removed, broadcasting: 5
Oct 22 19:31:03.439: INFO: Found all expected endpoints: [netserver-0]
Oct 22 19:31:03.441: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.233 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2547 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 22 19:31:03.441: INFO: >>> kubeConfig: /root/.kube/config
I1022 19:31:03.481366       6 log.go:172] (0xc001e5bad0) (0xc000eeed20) Create stream
I1022 19:31:03.481398       6 log.go:172] (0xc001e5bad0) (0xc000eeed20) Stream added, broadcasting: 1
I1022 19:31:03.484366       6 log.go:172] (0xc001e5bad0) Reply frame received for 1
I1022 19:31:03.484408       6 log.go:172] (0xc001e5bad0) (0xc002928780) Create stream
I1022 19:31:03.484424       6 log.go:172] (0xc001e5bad0) (0xc002928780) Stream added, broadcasting: 3
I1022 19:31:03.485507       6 log.go:172] (0xc001e5bad0) Reply frame received for 3
I1022 19:31:03.485550       6 log.go:172] (0xc001e5bad0) (0xc00345a1e0) Create stream
I1022 19:31:03.485565       6 log.go:172] (0xc001e5bad0) (0xc00345a1e0) Stream added, broadcasting: 5
I1022 19:31:03.486775       6 log.go:172] (0xc001e5bad0) Reply frame received for 5
I1022 19:31:04.577034       6 log.go:172] (0xc001e5bad0) Data frame received for 3
I1022 19:31:04.577086       6 log.go:172] (0xc002928780) (3) Data frame handling
I1022 19:31:04.577121       6 log.go:172] (0xc002928780) (3) Data frame sent
I1022 19:31:04.577157       6 log.go:172] (0xc001e5bad0) Data frame received for 3
I1022 19:31:04.577188       6 log.go:172] (0xc002928780) (3) Data frame handling
I1022 19:31:04.577370       6 log.go:172] (0xc001e5bad0) Data frame received for 5
I1022 19:31:04.577395       6 log.go:172] (0xc00345a1e0) (5) Data frame handling
I1022 19:31:04.579414       6 log.go:172] (0xc001e5bad0) Data frame received for 1
I1022 19:31:04.579445       6 log.go:172] (0xc000eeed20) (1) Data frame handling
I1022 19:31:04.579474       6 log.go:172] (0xc000eeed20) (1) Data frame sent
I1022 19:31:04.579526       6 log.go:172] (0xc001e5bad0) (0xc000eeed20) Stream removed, broadcasting: 1
I1022 19:31:04.579605       6 log.go:172] (0xc001e5bad0) Go away received
I1022 19:31:04.579718       6 log.go:172] (0xc001e5bad0) (0xc000eeed20) Stream removed, broadcasting: 1
I1022 19:31:04.579750       6 log.go:172] (0xc001e5bad0) (0xc002928780) Stream removed, broadcasting: 3
I1022 19:31:04.579769       6 log.go:172] (0xc001e5bad0) (0xc00345a1e0) Stream removed, broadcasting: 5
Oct 22 19:31:04.579: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:31:04.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2547" for this suite.
Oct 22 19:31:27.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:31:27.496: INFO: namespace pod-network-test-2547 deletion completed in 22.897912845s

• [SLOW TEST:53.361 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:31:27.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Oct 22 19:31:27.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3068'
Oct 22 19:31:27.985: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Oct 22 19:31:27.985: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Oct 22 19:31:28.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-3068'
Oct 22 19:31:28.642: INFO: stderr: ""
Oct 22 19:31:28.642: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:31:28.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3068" for this suite.
Oct 22 19:31:50.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:31:51.019: INFO: namespace kubectl-3068 deletion completed in 22.120831767s

• [SLOW TEST:23.523 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
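The kubectl run --generator=job/v1 invocation above is deprecated (and removed in later releases); it amounted to creating a batch/v1 Job whose pod template uses RestartPolicy OnFailure, so a failed container is restarted in place by the kubelet rather than the Job creating a replacement pod. A rough, hedged sketch of the equivalent object:

package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// OnFailure restarts the container in place on non-zero exit.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:  "e2e-test-nginx-job",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	fmt.Println(job.Name)
}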
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:31:51.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct 22 19:31:51.106: INFO: Waiting up to 5m0s for pod "pod-95d10b50-b6dc-4af8-8938-aa4e1aa1e37c" in namespace "emptydir-1051" to be "success or failure"
Oct 22 19:31:51.190: INFO: Pod "pod-95d10b50-b6dc-4af8-8938-aa4e1aa1e37c": Phase="Pending", Reason="", readiness=false. Elapsed: 83.498995ms
Oct 22 19:31:53.194: INFO: Pod "pod-95d10b50-b6dc-4af8-8938-aa4e1aa1e37c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087898864s
Oct 22 19:31:55.198: INFO: Pod "pod-95d10b50-b6dc-4af8-8938-aa4e1aa1e37c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092183266s
STEP: Saw pod success
Oct 22 19:31:55.198: INFO: Pod "pod-95d10b50-b6dc-4af8-8938-aa4e1aa1e37c" satisfied condition "success or failure"
Oct 22 19:31:55.201: INFO: Trying to get logs from node iruya-worker2 pod pod-95d10b50-b6dc-4af8-8938-aa4e1aa1e37c container test-container: 
STEP: delete the pod
Oct 22 19:31:55.285: INFO: Waiting for pod pod-95d10b50-b6dc-4af8-8938-aa4e1aa1e37c to disappear
Oct 22 19:31:55.302: INFO: Pod pod-95d10b50-b6dc-4af8-8938-aa4e1aa1e37c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:31:55.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1051" for this suite.
Oct 22 19:32:01.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:32:01.413: INFO: namespace emptydir-1051 deletion completed in 6.106190756s

• [SLOW TEST:10.393 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
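For reference, '(non-root,0777,tmpfs)' unpacks to: run the container as a non-root user, back the emptyDir with memory (tmpfs), and create and verify a file with 0777 permissions on it. A minimal sketch under those assumptions; the image, UID, paths and command are illustrative, not the suite's mounttest wiring:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-0777           # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                   # "non-root"
  containers:
  - name: test-container
    image: busybox                    # illustrative
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                  # "tmpfs"

The later (root,0666,default) spec in this run is the same shape with an empty 'emptyDir: {}' (node-default storage instead of tmpfs), no runAsUser, and mode 0666.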
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:32:01.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-398b6226-52a7-4efb-8011-8b45a3d62136
STEP: Creating a pod to test consume configMaps
Oct 22 19:32:01.504: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-eabae7e9-e7e3-4d09-9427-8fd9a15798c7" in namespace "projected-5120" to be "success or failure"
Oct 22 19:32:01.510: INFO: Pod "pod-projected-configmaps-eabae7e9-e7e3-4d09-9427-8fd9a15798c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135796ms
Oct 22 19:32:03.513: INFO: Pod "pod-projected-configmaps-eabae7e9-e7e3-4d09-9427-8fd9a15798c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009275077s
Oct 22 19:32:05.516: INFO: Pod "pod-projected-configmaps-eabae7e9-e7e3-4d09-9427-8fd9a15798c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012171027s
STEP: Saw pod success
Oct 22 19:32:05.516: INFO: Pod "pod-projected-configmaps-eabae7e9-e7e3-4d09-9427-8fd9a15798c7" satisfied condition "success or failure"
Oct 22 19:32:05.519: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-eabae7e9-e7e3-4d09-9427-8fd9a15798c7 container projected-configmap-volume-test: 
STEP: delete the pod
Oct 22 19:32:05.555: INFO: Waiting for pod pod-projected-configmaps-eabae7e9-e7e3-4d09-9427-8fd9a15798c7 to disappear
Oct 22 19:32:05.558: INFO: Pod pod-projected-configmaps-eabae7e9-e7e3-4d09-9427-8fd9a15798c7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:32:05.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5120" for this suite.
Oct 22 19:32:11.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:32:11.681: INFO: namespace projected-5120 deletion completed in 6.119678629s

• [SLOW TEST:10.268 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
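The defaultMode being verified here is the file mode the kubelet applies to every projected key unless an item overrides it. A minimal sketch of the objects involved; names, image, mount path and the 0400 mode are illustrative, not necessarily what the suite sets:

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-test-volume     # illustrative; must match the reference below
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                           # illustrative
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      defaultMode: 0400                      # without it, projected keys default to 0644
      sources:
      - configMap:
          name: projected-configmap-test-volume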
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:32:11.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Oct 22 19:32:11.796: INFO: Waiting up to 5m0s for pod "pod-75297bd8-550f-4e95-aad0-c8bac26123a9" in namespace "emptydir-9142" to be "success or failure"
Oct 22 19:32:11.804: INFO: Pod "pod-75297bd8-550f-4e95-aad0-c8bac26123a9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.342135ms
Oct 22 19:32:13.807: INFO: Pod "pod-75297bd8-550f-4e95-aad0-c8bac26123a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0109054s
Oct 22 19:32:15.855: INFO: Pod "pod-75297bd8-550f-4e95-aad0-c8bac26123a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058434638s
STEP: Saw pod success
Oct 22 19:32:15.855: INFO: Pod "pod-75297bd8-550f-4e95-aad0-c8bac26123a9" satisfied condition "success or failure"
Oct 22 19:32:15.858: INFO: Trying to get logs from node iruya-worker2 pod pod-75297bd8-550f-4e95-aad0-c8bac26123a9 container test-container: 
STEP: delete the pod
Oct 22 19:32:15.933: INFO: Waiting for pod pod-75297bd8-550f-4e95-aad0-c8bac26123a9 to disappear
Oct 22 19:32:16.023: INFO: Pod pod-75297bd8-550f-4e95-aad0-c8bac26123a9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:32:16.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9142" for this suite.
Oct 22 19:32:22.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:32:22.191: INFO: namespace emptydir-9142 deletion completed in 6.165492654s

• [SLOW TEST:10.510 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:32:22.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-596d2cd0-1707-4ab0-8f7b-fbeb1617e9a4
STEP: Creating a pod to test consume secrets
Oct 22 19:32:22.316: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8a033e94-90f1-4ef3-9f58-e47b99e82973" in namespace "projected-2670" to be "success or failure"
Oct 22 19:32:22.418: INFO: Pod "pod-projected-secrets-8a033e94-90f1-4ef3-9f58-e47b99e82973": Phase="Pending", Reason="", readiness=false. Elapsed: 102.094812ms
Oct 22 19:32:24.423: INFO: Pod "pod-projected-secrets-8a033e94-90f1-4ef3-9f58-e47b99e82973": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10645194s
Oct 22 19:32:26.427: INFO: Pod "pod-projected-secrets-8a033e94-90f1-4ef3-9f58-e47b99e82973": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111038146s
STEP: Saw pod success
Oct 22 19:32:26.427: INFO: Pod "pod-projected-secrets-8a033e94-90f1-4ef3-9f58-e47b99e82973" satisfied condition "success or failure"
Oct 22 19:32:26.430: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-8a033e94-90f1-4ef3-9f58-e47b99e82973 container projected-secret-volume-test: 
STEP: delete the pod
Oct 22 19:32:26.499: INFO: Waiting for pod pod-projected-secrets-8a033e94-90f1-4ef3-9f58-e47b99e82973 to disappear
Oct 22 19:32:26.512: INFO: Pod pod-projected-secrets-8a033e94-90f1-4ef3-9f58-e47b99e82973 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:32:26.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2670" for this suite.
Oct 22 19:32:32.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:32:32.616: INFO: namespace projected-2670 deletion completed in 6.100605205s

• [SLOW TEST:10.424 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
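The secret variant is the same pattern with a secret source under projected.sources; a sketch that can be applied as a single file, names, image and mode again illustrative:

apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test          # illustrative
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox                     # illustrative
    command: ["sh", "-c", "cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400
      sources:
      - secret:
          name: projected-secret-test

The plain 'consumable from pods in volume' secret spec later in this run is the same manifest without defaultMode, so the keys fall back to the 0644 default.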
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:32:32.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-0f4c0977-89bd-41ae-a61d-4d5ae096e734
STEP: Creating a pod to test consume configMaps
Oct 22 19:32:32.801: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-de668af4-34f3-483b-b46f-c3510728ef6d" in namespace "projected-4965" to be "success or failure"
Oct 22 19:32:32.816: INFO: Pod "pod-projected-configmaps-de668af4-34f3-483b-b46f-c3510728ef6d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.034788ms
Oct 22 19:32:34.821: INFO: Pod "pod-projected-configmaps-de668af4-34f3-483b-b46f-c3510728ef6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019569697s
Oct 22 19:32:36.825: INFO: Pod "pod-projected-configmaps-de668af4-34f3-483b-b46f-c3510728ef6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023424509s
STEP: Saw pod success
Oct 22 19:32:36.825: INFO: Pod "pod-projected-configmaps-de668af4-34f3-483b-b46f-c3510728ef6d" satisfied condition "success or failure"
Oct 22 19:32:36.827: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-de668af4-34f3-483b-b46f-c3510728ef6d container projected-configmap-volume-test: 
STEP: delete the pod
Oct 22 19:32:36.929: INFO: Waiting for pod pod-projected-configmaps-de668af4-34f3-483b-b46f-c3510728ef6d to disappear
Oct 22 19:32:36.991: INFO: Pod pod-projected-configmaps-de668af4-34f3-483b-b46f-c3510728ef6d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:32:36.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4965" for this suite.
Oct 22 19:32:43.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:32:43.118: INFO: namespace projected-4965 deletion completed in 6.123343843s

• [SLOW TEST:10.501 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
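'Multiple volumes in the same pod' means one ConfigMap projected through two volume entries and two mounts; a sketch (the referenced ConfigMap is assumed to exist, names and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-multi      # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                           # illustrative
    command: ["sh", "-c", "cat /etc/projected-configmap-volume-1/data-1 /etc/projected-configmap-volume-2/data-1"]
    volumeMounts:
    - name: projected-configmap-volume-1
      mountPath: /etc/projected-configmap-volume-1
    - name: projected-configmap-volume-2
      mountPath: /etc/projected-configmap-volume-2
  volumes:
  - name: projected-configmap-volume-1
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # the same ConfigMap, referenced twice
  - name: projected-configmap-volume-2
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume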
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:32:43.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Oct 22 19:32:47.951: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:32:48.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8287" for this suite.
Oct 22 19:32:54.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:32:54.178: INFO: namespace container-runtime-8287 deletion completed in 6.135435802s

• [SLOW TEST:11.059 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
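FallbackToLogsOnError only applies when the container fails and leaves terminationMessagePath empty; the kubelet then takes the tail of the container log as the termination message, which is why the expected value above is the logged 'DONE'. A sketch with an illustrative image and command:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-from-log   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox                     # illustrative
    command: ["sh", "-c", "echo DONE; exit 1"]     # fails without writing the message file
    terminationMessagePath: /dev/termination-log   # the default path, spelled out for clarity
    terminationMessagePolicy: FallbackToLogsOnError

Once the container has terminated, the message surfaces at status.containerStatuses[0].state.terminated.message.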
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:32:54.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-aa1e4421-8a5f-42ca-a822-6942d88256f0
STEP: Creating a pod to test consume secrets
Oct 22 19:32:54.293: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-20b9f2a7-5363-4f19-8dbf-4d920d5dd327" in namespace "projected-5611" to be "success or failure"
Oct 22 19:32:54.302: INFO: Pod "pod-projected-secrets-20b9f2a7-5363-4f19-8dbf-4d920d5dd327": Phase="Pending", Reason="", readiness=false. Elapsed: 9.737268ms
Oct 22 19:32:56.306: INFO: Pod "pod-projected-secrets-20b9f2a7-5363-4f19-8dbf-4d920d5dd327": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013176162s
Oct 22 19:32:58.310: INFO: Pod "pod-projected-secrets-20b9f2a7-5363-4f19-8dbf-4d920d5dd327": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017301401s
STEP: Saw pod success
Oct 22 19:32:58.310: INFO: Pod "pod-projected-secrets-20b9f2a7-5363-4f19-8dbf-4d920d5dd327" satisfied condition "success or failure"
Oct 22 19:32:58.313: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-20b9f2a7-5363-4f19-8dbf-4d920d5dd327 container projected-secret-volume-test: 
STEP: delete the pod
Oct 22 19:32:58.333: INFO: Waiting for pod pod-projected-secrets-20b9f2a7-5363-4f19-8dbf-4d920d5dd327 to disappear
Oct 22 19:32:58.351: INFO: Pod pod-projected-secrets-20b9f2a7-5363-4f19-8dbf-4d920d5dd327 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:32:58.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5611" for this suite.
Oct 22 19:33:04.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:33:04.500: INFO: namespace projected-5611 deletion completed in 6.142744826s

• [SLOW TEST:10.321 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:33:04.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Oct 22 19:33:09.103: INFO: Successfully updated pod "annotationupdateedfa2d89-2a0f-47f4-8791-4496481c2922"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:33:13.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8741" for this suite.
Oct 22 19:33:35.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:33:35.223: INFO: namespace downward-api-8741 deletion completed in 22.090274454s

• [SLOW TEST:30.723 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
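This spec leans on the kubelet refreshing downwardAPI volume files after pod metadata changes: the pod mounts its own annotations as a file, the test patches an annotation, and the file content catches up on the kubelet's next sync rather than instantly. A sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo          # illustrative
  annotations:
    build: one
spec:
  containers:
  - name: client-container
    image: busybox                     # illustrative
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations

After something like 'kubectl annotate pod annotationupdate-demo build=two --overwrite', the file under /etc/podinfo eventually reflects the new value; the projected downwardAPI spec at the end of this section exercises the same behaviour with the items nested under projected.sources.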
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:33:35.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Oct 22 19:33:35.310: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 19:33:35.326: INFO: Number of nodes with available pods: 0
Oct 22 19:33:35.326: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 19:33:36.347: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 19:33:36.351: INFO: Number of nodes with available pods: 0
Oct 22 19:33:36.351: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 19:33:37.331: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 19:33:37.333: INFO: Number of nodes with available pods: 0
Oct 22 19:33:37.333: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 19:33:38.431: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 19:33:38.490: INFO: Number of nodes with available pods: 0
Oct 22 19:33:38.490: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 19:33:39.331: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 19:33:39.334: INFO: Number of nodes with available pods: 1
Oct 22 19:33:39.334: INFO: Node iruya-worker2 is running more than one daemon pod
Oct 22 19:33:40.331: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 19:33:40.334: INFO: Number of nodes with available pods: 2
Oct 22 19:33:40.334: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Oct 22 19:33:40.358: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 19:33:40.369: INFO: Number of nodes with available pods: 2
Oct 22 19:33:40.369: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1098, will wait for the garbage collector to delete the pods
Oct 22 19:33:41.469: INFO: Deleting DaemonSet.extensions daemon-set took: 7.248211ms
Oct 22 19:33:41.769: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.233249ms
Oct 22 19:33:55.672: INFO: Number of nodes with available pods: 0
Oct 22 19:33:55.672: INFO: Number of running nodes: 0, number of available pods: 0
Oct 22 19:33:55.678: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1098/daemonsets","resourceVersion":"5314440"},"items":null}

Oct 22 19:33:55.681: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1098/pods","resourceVersion":"5314440"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:33:55.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1098" for this suite.
Oct 22 19:34:01.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:34:01.777: INFO: namespace daemonsets-1098 deletion completed in 6.084356235s

• [SLOW TEST:26.553 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
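The DaemonSet this spec creates is about as small as they come; a rough equivalent with illustrative name, label and image. The control-plane node is skipped in the checks above simply because the DaemonSet carries no toleration for the node-role.kubernetes.io/master NoSchedule taint:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                     # illustrative
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: busybox                 # illustrative; the suite uses its own serve-hostname style image
        command: ["sh", "-c", "sleep 3600"]

The 'revived' check then marks one daemon pod Failed and waits for the controller to create a replacement, which is the retry behaviour being asserted.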
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:34:01.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Oct 22 19:34:01.838: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix313511429/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:34:01.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5686" for this suite.
Oct 22 19:34:07.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:34:08.016: INFO: namespace kubectl-5686 deletion completed in 6.097336561s

• [SLOW TEST:6.239 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:34:08.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Oct 22 19:34:18.126: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1399 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 22 19:34:18.126: INFO: >>> kubeConfig: /root/.kube/config
I1022 19:34:18.156635       6 log.go:172] (0xc001792a50) (0xc002a7e780) Create stream
I1022 19:34:18.156679       6 log.go:172] (0xc001792a50) (0xc002a7e780) Stream added, broadcasting: 1
I1022 19:34:18.161698       6 log.go:172] (0xc001792a50) Reply frame received for 1
I1022 19:34:18.161768       6 log.go:172] (0xc001792a50) (0xc0019e4dc0) Create stream
I1022 19:34:18.161800       6 log.go:172] (0xc001792a50) (0xc0019e4dc0) Stream added, broadcasting: 3
I1022 19:34:18.162682       6 log.go:172] (0xc001792a50) Reply frame received for 3
I1022 19:34:18.162720       6 log.go:172] (0xc001792a50) (0xc0019e4e60) Create stream
I1022 19:34:18.162742       6 log.go:172] (0xc001792a50) (0xc0019e4e60) Stream added, broadcasting: 5
I1022 19:34:18.163521       6 log.go:172] (0xc001792a50) Reply frame received for 5
I1022 19:34:18.253285       6 log.go:172] (0xc001792a50) Data frame received for 3
I1022 19:34:18.253317       6 log.go:172] (0xc0019e4dc0) (3) Data frame handling
I1022 19:34:18.253325       6 log.go:172] (0xc0019e4dc0) (3) Data frame sent
I1022 19:34:18.253331       6 log.go:172] (0xc001792a50) Data frame received for 3
I1022 19:34:18.253335       6 log.go:172] (0xc0019e4dc0) (3) Data frame handling
I1022 19:34:18.253361       6 log.go:172] (0xc001792a50) Data frame received for 5
I1022 19:34:18.253369       6 log.go:172] (0xc0019e4e60) (5) Data frame handling
I1022 19:34:18.254746       6 log.go:172] (0xc001792a50) Data frame received for 1
I1022 19:34:18.254787       6 log.go:172] (0xc002a7e780) (1) Data frame handling
I1022 19:34:18.254820       6 log.go:172] (0xc002a7e780) (1) Data frame sent
I1022 19:34:18.254844       6 log.go:172] (0xc001792a50) (0xc002a7e780) Stream removed, broadcasting: 1
I1022 19:34:18.254875       6 log.go:172] (0xc001792a50) Go away received
I1022 19:34:18.255022       6 log.go:172] (0xc001792a50) (0xc002a7e780) Stream removed, broadcasting: 1
I1022 19:34:18.255060       6 log.go:172] (0xc001792a50) (0xc0019e4dc0) Stream removed, broadcasting: 3
I1022 19:34:18.255083       6 log.go:172] (0xc001792a50) (0xc0019e4e60) Stream removed, broadcasting: 5
Oct 22 19:34:18.255: INFO: Exec stderr: ""
Oct 22 19:34:18.255: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1399 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 22 19:34:18.255: INFO: >>> kubeConfig: /root/.kube/config
I1022 19:34:18.292176       6 log.go:172] (0xc0012f68f0) (0xc00173fe00) Create stream
I1022 19:34:18.292230       6 log.go:172] (0xc0012f68f0) (0xc00173fe00) Stream added, broadcasting: 1
I1022 19:34:18.295568       6 log.go:172] (0xc0012f68f0) Reply frame received for 1
I1022 19:34:18.295638       6 log.go:172] (0xc0012f68f0) (0xc002a7e820) Create stream
I1022 19:34:18.295673       6 log.go:172] (0xc0012f68f0) (0xc002a7e820) Stream added, broadcasting: 3
I1022 19:34:18.297256       6 log.go:172] (0xc0012f68f0) Reply frame received for 3
I1022 19:34:18.297301       6 log.go:172] (0xc0012f68f0) (0xc00173ff40) Create stream
I1022 19:34:18.297318       6 log.go:172] (0xc0012f68f0) (0xc00173ff40) Stream added, broadcasting: 5
I1022 19:34:18.298448       6 log.go:172] (0xc0012f68f0) Reply frame received for 5
I1022 19:34:18.360546       6 log.go:172] (0xc0012f68f0) Data frame received for 5
I1022 19:34:18.360582       6 log.go:172] (0xc00173ff40) (5) Data frame handling
I1022 19:34:18.360601       6 log.go:172] (0xc0012f68f0) Data frame received for 3
I1022 19:34:18.360612       6 log.go:172] (0xc002a7e820) (3) Data frame handling
I1022 19:34:18.360625       6 log.go:172] (0xc002a7e820) (3) Data frame sent
I1022 19:34:18.360632       6 log.go:172] (0xc0012f68f0) Data frame received for 3
I1022 19:34:18.360643       6 log.go:172] (0xc002a7e820) (3) Data frame handling
I1022 19:34:18.361887       6 log.go:172] (0xc0012f68f0) Data frame received for 1
I1022 19:34:18.361907       6 log.go:172] (0xc00173fe00) (1) Data frame handling
I1022 19:34:18.361921       6 log.go:172] (0xc00173fe00) (1) Data frame sent
I1022 19:34:18.361933       6 log.go:172] (0xc0012f68f0) (0xc00173fe00) Stream removed, broadcasting: 1
I1022 19:34:18.361997       6 log.go:172] (0xc0012f68f0) Go away received
I1022 19:34:18.362047       6 log.go:172] (0xc0012f68f0) (0xc00173fe00) Stream removed, broadcasting: 1
I1022 19:34:18.362081       6 log.go:172] (0xc0012f68f0) (0xc002a7e820) Stream removed, broadcasting: 3
I1022 19:34:18.362100       6 log.go:172] (0xc0012f68f0) (0xc00173ff40) Stream removed, broadcasting: 5
Oct 22 19:34:18.362: INFO: Exec stderr: ""
Oct 22 19:34:18.362: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1399 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 22 19:34:18.362: INFO: >>> kubeConfig: /root/.kube/config
I1022 19:34:18.411152       6 log.go:172] (0xc0012f73f0) (0xc001ade1e0) Create stream
I1022 19:34:18.411182       6 log.go:172] (0xc0012f73f0) (0xc001ade1e0) Stream added, broadcasting: 1
I1022 19:34:18.413625       6 log.go:172] (0xc0012f73f0) Reply frame received for 1
I1022 19:34:18.413678       6 log.go:172] (0xc0012f73f0) (0xc001ade280) Create stream
I1022 19:34:18.413699       6 log.go:172] (0xc0012f73f0) (0xc001ade280) Stream added, broadcasting: 3
I1022 19:34:18.414683       6 log.go:172] (0xc0012f73f0) Reply frame received for 3
I1022 19:34:18.414728       6 log.go:172] (0xc0012f73f0) (0xc001ade3c0) Create stream
I1022 19:34:18.414748       6 log.go:172] (0xc0012f73f0) (0xc001ade3c0) Stream added, broadcasting: 5
I1022 19:34:18.415777       6 log.go:172] (0xc0012f73f0) Reply frame received for 5
I1022 19:34:18.475874       6 log.go:172] (0xc0012f73f0) Data frame received for 5
I1022 19:34:18.475906       6 log.go:172] (0xc001ade3c0) (5) Data frame handling
I1022 19:34:18.475950       6 log.go:172] (0xc0012f73f0) Data frame received for 3
I1022 19:34:18.475989       6 log.go:172] (0xc001ade280) (3) Data frame handling
I1022 19:34:18.476017       6 log.go:172] (0xc001ade280) (3) Data frame sent
I1022 19:34:18.476039       6 log.go:172] (0xc0012f73f0) Data frame received for 3
I1022 19:34:18.476059       6 log.go:172] (0xc001ade280) (3) Data frame handling
I1022 19:34:18.477631       6 log.go:172] (0xc0012f73f0) Data frame received for 1
I1022 19:34:18.477650       6 log.go:172] (0xc001ade1e0) (1) Data frame handling
I1022 19:34:18.477662       6 log.go:172] (0xc001ade1e0) (1) Data frame sent
I1022 19:34:18.477759       6 log.go:172] (0xc0012f73f0) (0xc001ade1e0) Stream removed, broadcasting: 1
I1022 19:34:18.477820       6 log.go:172] (0xc0012f73f0) Go away received
I1022 19:34:18.478121       6 log.go:172] (0xc0012f73f0) (0xc001ade1e0) Stream removed, broadcasting: 1
I1022 19:34:18.478157       6 log.go:172] (0xc0012f73f0) (0xc001ade280) Stream removed, broadcasting: 3
I1022 19:34:18.478182       6 log.go:172] (0xc0012f73f0) (0xc001ade3c0) Stream removed, broadcasting: 5
Oct 22 19:34:18.478: INFO: Exec stderr: ""
Oct 22 19:34:18.478: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1399 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 22 19:34:18.478: INFO: >>> kubeConfig: /root/.kube/config
I1022 19:34:18.529715       6 log.go:172] (0xc000ac7ad0) (0xc00177e1e0) Create stream
I1022 19:34:18.529768       6 log.go:172] (0xc000ac7ad0) (0xc00177e1e0) Stream added, broadcasting: 1
I1022 19:34:18.533171       6 log.go:172] (0xc000ac7ad0) Reply frame received for 1
I1022 19:34:18.533213       6 log.go:172] (0xc000ac7ad0) (0xc002a7e8c0) Create stream
I1022 19:34:18.533231       6 log.go:172] (0xc000ac7ad0) (0xc002a7e8c0) Stream added, broadcasting: 3
I1022 19:34:18.534361       6 log.go:172] (0xc000ac7ad0) Reply frame received for 3
I1022 19:34:18.534393       6 log.go:172] (0xc000ac7ad0) (0xc002929400) Create stream
I1022 19:34:18.534410       6 log.go:172] (0xc000ac7ad0) (0xc002929400) Stream added, broadcasting: 5
I1022 19:34:18.535508       6 log.go:172] (0xc000ac7ad0) Reply frame received for 5
I1022 19:34:18.602651       6 log.go:172] (0xc000ac7ad0) Data frame received for 5
I1022 19:34:18.602686       6 log.go:172] (0xc002929400) (5) Data frame handling
I1022 19:34:18.602705       6 log.go:172] (0xc000ac7ad0) Data frame received for 3
I1022 19:34:18.602710       6 log.go:172] (0xc002a7e8c0) (3) Data frame handling
I1022 19:34:18.602717       6 log.go:172] (0xc002a7e8c0) (3) Data frame sent
I1022 19:34:18.602730       6 log.go:172] (0xc000ac7ad0) Data frame received for 3
I1022 19:34:18.602735       6 log.go:172] (0xc002a7e8c0) (3) Data frame handling
I1022 19:34:18.604328       6 log.go:172] (0xc000ac7ad0) Data frame received for 1
I1022 19:34:18.604343       6 log.go:172] (0xc00177e1e0) (1) Data frame handling
I1022 19:34:18.604349       6 log.go:172] (0xc00177e1e0) (1) Data frame sent
I1022 19:34:18.604356       6 log.go:172] (0xc000ac7ad0) (0xc00177e1e0) Stream removed, broadcasting: 1
I1022 19:34:18.604419       6 log.go:172] (0xc000ac7ad0) Go away received
I1022 19:34:18.604447       6 log.go:172] (0xc000ac7ad0) (0xc00177e1e0) Stream removed, broadcasting: 1
I1022 19:34:18.604470       6 log.go:172] (0xc000ac7ad0) (0xc002a7e8c0) Stream removed, broadcasting: 3
I1022 19:34:18.604482       6 log.go:172] (0xc000ac7ad0) (0xc002929400) Stream removed, broadcasting: 5
Oct 22 19:34:18.604: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Oct 22 19:34:18.604: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1399 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 22 19:34:18.604: INFO: >>> kubeConfig: /root/.kube/config
I1022 19:34:18.639726       6 log.go:172] (0xc001f9e6e0) (0xc00177edc0) Create stream
I1022 19:34:18.639753       6 log.go:172] (0xc001f9e6e0) (0xc00177edc0) Stream added, broadcasting: 1
I1022 19:34:18.642871       6 log.go:172] (0xc001f9e6e0) Reply frame received for 1
I1022 19:34:18.642914       6 log.go:172] (0xc001f9e6e0) (0xc001ade500) Create stream
I1022 19:34:18.642929       6 log.go:172] (0xc001f9e6e0) (0xc001ade500) Stream added, broadcasting: 3
I1022 19:34:18.644122       6 log.go:172] (0xc001f9e6e0) Reply frame received for 3
I1022 19:34:18.644166       6 log.go:172] (0xc001f9e6e0) (0xc0029294a0) Create stream
I1022 19:34:18.644182       6 log.go:172] (0xc001f9e6e0) (0xc0029294a0) Stream added, broadcasting: 5
I1022 19:34:18.645481       6 log.go:172] (0xc001f9e6e0) Reply frame received for 5
I1022 19:34:18.698105       6 log.go:172] (0xc001f9e6e0) Data frame received for 3
I1022 19:34:18.698163       6 log.go:172] (0xc001ade500) (3) Data frame handling
I1022 19:34:18.698193       6 log.go:172] (0xc001ade500) (3) Data frame sent
I1022 19:34:18.698219       6 log.go:172] (0xc001f9e6e0) Data frame received for 3
I1022 19:34:18.698241       6 log.go:172] (0xc001ade500) (3) Data frame handling
I1022 19:34:18.698265       6 log.go:172] (0xc001f9e6e0) Data frame received for 5
I1022 19:34:18.698283       6 log.go:172] (0xc0029294a0) (5) Data frame handling
I1022 19:34:18.699857       6 log.go:172] (0xc001f9e6e0) Data frame received for 1
I1022 19:34:18.699876       6 log.go:172] (0xc00177edc0) (1) Data frame handling
I1022 19:34:18.699887       6 log.go:172] (0xc00177edc0) (1) Data frame sent
I1022 19:34:18.699905       6 log.go:172] (0xc001f9e6e0) (0xc00177edc0) Stream removed, broadcasting: 1
I1022 19:34:18.699952       6 log.go:172] (0xc001f9e6e0) Go away received
I1022 19:34:18.700121       6 log.go:172] (0xc001f9e6e0) (0xc00177edc0) Stream removed, broadcasting: 1
I1022 19:34:18.700150       6 log.go:172] (0xc001f9e6e0) (0xc001ade500) Stream removed, broadcasting: 3
I1022 19:34:18.700163       6 log.go:172] (0xc001f9e6e0) (0xc0029294a0) Stream removed, broadcasting: 5
Oct 22 19:34:18.700: INFO: Exec stderr: ""
Oct 22 19:34:18.700: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1399 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 22 19:34:18.700: INFO: >>> kubeConfig: /root/.kube/config
I1022 19:34:18.750972       6 log.go:172] (0xc001e766e0) (0xc001adea00) Create stream
I1022 19:34:18.751000       6 log.go:172] (0xc001e766e0) (0xc001adea00) Stream added, broadcasting: 1
I1022 19:34:18.753325       6 log.go:172] (0xc001e766e0) Reply frame received for 1
I1022 19:34:18.753358       6 log.go:172] (0xc001e766e0) (0xc002929540) Create stream
I1022 19:34:18.753368       6 log.go:172] (0xc001e766e0) (0xc002929540) Stream added, broadcasting: 3
I1022 19:34:18.754095       6 log.go:172] (0xc001e766e0) Reply frame received for 3
I1022 19:34:18.754122       6 log.go:172] (0xc001e766e0) (0xc002a7e960) Create stream
I1022 19:34:18.754132       6 log.go:172] (0xc001e766e0) (0xc002a7e960) Stream added, broadcasting: 5
I1022 19:34:18.754784       6 log.go:172] (0xc001e766e0) Reply frame received for 5
I1022 19:34:18.816803       6 log.go:172] (0xc001e766e0) Data frame received for 3
I1022 19:34:18.816903       6 log.go:172] (0xc002929540) (3) Data frame handling
I1022 19:34:18.816918       6 log.go:172] (0xc002929540) (3) Data frame sent
I1022 19:34:18.816925       6 log.go:172] (0xc001e766e0) Data frame received for 3
I1022 19:34:18.816933       6 log.go:172] (0xc002929540) (3) Data frame handling
I1022 19:34:18.816949       6 log.go:172] (0xc001e766e0) Data frame received for 5
I1022 19:34:18.816961       6 log.go:172] (0xc002a7e960) (5) Data frame handling
I1022 19:34:18.818607       6 log.go:172] (0xc001e766e0) Data frame received for 1
I1022 19:34:18.818626       6 log.go:172] (0xc001adea00) (1) Data frame handling
I1022 19:34:18.818641       6 log.go:172] (0xc001adea00) (1) Data frame sent
I1022 19:34:18.818654       6 log.go:172] (0xc001e766e0) (0xc001adea00) Stream removed, broadcasting: 1
I1022 19:34:18.818667       6 log.go:172] (0xc001e766e0) Go away received
I1022 19:34:18.818790       6 log.go:172] (0xc001e766e0) (0xc001adea00) Stream removed, broadcasting: 1
I1022 19:34:18.818834       6 log.go:172] (0xc001e766e0) (0xc002929540) Stream removed, broadcasting: 3
I1022 19:34:18.818862       6 log.go:172] (0xc001e766e0) (0xc002a7e960) Stream removed, broadcasting: 5
Oct 22 19:34:18.818: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Oct 22 19:34:18.818: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1399 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 22 19:34:18.818: INFO: >>> kubeConfig: /root/.kube/config
I1022 19:34:18.851882       6 log.go:172] (0xc000f0fa20) (0xc002929680) Create stream
I1022 19:34:18.851916       6 log.go:172] (0xc000f0fa20) (0xc002929680) Stream added, broadcasting: 1
I1022 19:34:18.853701       6 log.go:172] (0xc000f0fa20) Reply frame received for 1
I1022 19:34:18.853736       6 log.go:172] (0xc000f0fa20) (0xc0029297c0) Create stream
I1022 19:34:18.853751       6 log.go:172] (0xc000f0fa20) (0xc0029297c0) Stream added, broadcasting: 3
I1022 19:34:18.854312       6 log.go:172] (0xc000f0fa20) Reply frame received for 3
I1022 19:34:18.854334       6 log.go:172] (0xc000f0fa20) (0xc002929900) Create stream
I1022 19:34:18.854342       6 log.go:172] (0xc000f0fa20) (0xc002929900) Stream added, broadcasting: 5
I1022 19:34:18.854923       6 log.go:172] (0xc000f0fa20) Reply frame received for 5
I1022 19:34:18.917926       6 log.go:172] (0xc000f0fa20) Data frame received for 5
I1022 19:34:18.917963       6 log.go:172] (0xc002929900) (5) Data frame handling
I1022 19:34:18.917988       6 log.go:172] (0xc000f0fa20) Data frame received for 3
I1022 19:34:18.918009       6 log.go:172] (0xc0029297c0) (3) Data frame handling
I1022 19:34:18.918027       6 log.go:172] (0xc0029297c0) (3) Data frame sent
I1022 19:34:18.918051       6 log.go:172] (0xc000f0fa20) Data frame received for 3
I1022 19:34:18.918064       6 log.go:172] (0xc0029297c0) (3) Data frame handling
I1022 19:34:18.919558       6 log.go:172] (0xc000f0fa20) Data frame received for 1
I1022 19:34:18.919597       6 log.go:172] (0xc002929680) (1) Data frame handling
I1022 19:34:18.919638       6 log.go:172] (0xc002929680) (1) Data frame sent
I1022 19:34:18.919666       6 log.go:172] (0xc000f0fa20) (0xc002929680) Stream removed, broadcasting: 1
I1022 19:34:18.919699       6 log.go:172] (0xc000f0fa20) Go away received
I1022 19:34:18.919828       6 log.go:172] (0xc000f0fa20) (0xc002929680) Stream removed, broadcasting: 1
I1022 19:34:18.919876       6 log.go:172] (0xc000f0fa20) (0xc0029297c0) Stream removed, broadcasting: 3
I1022 19:34:18.919912       6 log.go:172] (0xc000f0fa20) (0xc002929900) Stream removed, broadcasting: 5
Oct 22 19:34:18.919: INFO: Exec stderr: ""
Oct 22 19:34:18.919: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1399 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 22 19:34:18.920: INFO: >>> kubeConfig: /root/.kube/config
I1022 19:34:18.957648       6 log.go:172] (0xc001f9f810) (0xc00177ff40) Create stream
I1022 19:34:18.957678       6 log.go:172] (0xc001f9f810) (0xc00177ff40) Stream added, broadcasting: 1
I1022 19:34:18.960173       6 log.go:172] (0xc001f9f810) Reply frame received for 1
I1022 19:34:18.960212       6 log.go:172] (0xc001f9f810) (0xc0029299a0) Create stream
I1022 19:34:18.960226       6 log.go:172] (0xc001f9f810) (0xc0029299a0) Stream added, broadcasting: 3
I1022 19:34:18.961475       6 log.go:172] (0xc001f9f810) Reply frame received for 3
I1022 19:34:18.961524       6 log.go:172] (0xc001f9f810) (0xc0019e5220) Create stream
I1022 19:34:18.961540       6 log.go:172] (0xc001f9f810) (0xc0019e5220) Stream added, broadcasting: 5
I1022 19:34:18.962618       6 log.go:172] (0xc001f9f810) Reply frame received for 5
I1022 19:34:19.021762       6 log.go:172] (0xc001f9f810) Data frame received for 5
I1022 19:34:19.021789       6 log.go:172] (0xc0019e5220) (5) Data frame handling
I1022 19:34:19.021815       6 log.go:172] (0xc001f9f810) Data frame received for 3
I1022 19:34:19.021824       6 log.go:172] (0xc0029299a0) (3) Data frame handling
I1022 19:34:19.021842       6 log.go:172] (0xc0029299a0) (3) Data frame sent
I1022 19:34:19.021848       6 log.go:172] (0xc001f9f810) Data frame received for 3
I1022 19:34:19.021855       6 log.go:172] (0xc0029299a0) (3) Data frame handling
I1022 19:34:19.023394       6 log.go:172] (0xc001f9f810) Data frame received for 1
I1022 19:34:19.023427       6 log.go:172] (0xc00177ff40) (1) Data frame handling
I1022 19:34:19.023478       6 log.go:172] (0xc00177ff40) (1) Data frame sent
I1022 19:34:19.023515       6 log.go:172] (0xc001f9f810) (0xc00177ff40) Stream removed, broadcasting: 1
I1022 19:34:19.023553       6 log.go:172] (0xc001f9f810) Go away received
I1022 19:34:19.023646       6 log.go:172] (0xc001f9f810) (0xc00177ff40) Stream removed, broadcasting: 1
I1022 19:34:19.023683       6 log.go:172] (0xc001f9f810) (0xc0029299a0) Stream removed, broadcasting: 3
I1022 19:34:19.023694       6 log.go:172] (0xc001f9f810) (0xc0019e5220) Stream removed, broadcasting: 5
Oct 22 19:34:19.023: INFO: Exec stderr: ""
Oct 22 19:34:19.023: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1399 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 22 19:34:19.023: INFO: >>> kubeConfig: /root/.kube/config
I1022 19:34:19.057201       6 log.go:172] (0xc0023b6370) (0xc002a7ec80) Create stream
I1022 19:34:19.057235       6 log.go:172] (0xc0023b6370) (0xc002a7ec80) Stream added, broadcasting: 1
I1022 19:34:19.059517       6 log.go:172] (0xc0023b6370) Reply frame received for 1
I1022 19:34:19.059551       6 log.go:172] (0xc0023b6370) (0xc002929a40) Create stream
I1022 19:34:19.059562       6 log.go:172] (0xc0023b6370) (0xc002929a40) Stream added, broadcasting: 3
I1022 19:34:19.060335       6 log.go:172] (0xc0023b6370) Reply frame received for 3
I1022 19:34:19.060363       6 log.go:172] (0xc0023b6370) (0xc0019e52c0) Create stream
I1022 19:34:19.060373       6 log.go:172] (0xc0023b6370) (0xc0019e52c0) Stream added, broadcasting: 5
I1022 19:34:19.061120       6 log.go:172] (0xc0023b6370) Reply frame received for 5
I1022 19:34:19.132768       6 log.go:172] (0xc0023b6370) Data frame received for 3
I1022 19:34:19.132804       6 log.go:172] (0xc002929a40) (3) Data frame handling
I1022 19:34:19.132821       6 log.go:172] (0xc002929a40) (3) Data frame sent
I1022 19:34:19.132828       6 log.go:172] (0xc0023b6370) Data frame received for 3
I1022 19:34:19.132915       6 log.go:172] (0xc0023b6370) Data frame received for 5
I1022 19:34:19.132934       6 log.go:172] (0xc0019e52c0) (5) Data frame handling
I1022 19:34:19.132972       6 log.go:172] (0xc002929a40) (3) Data frame handling
I1022 19:34:19.134347       6 log.go:172] (0xc0023b6370) Data frame received for 1
I1022 19:34:19.134378       6 log.go:172] (0xc002a7ec80) (1) Data frame handling
I1022 19:34:19.134408       6 log.go:172] (0xc002a7ec80) (1) Data frame sent
I1022 19:34:19.134439       6 log.go:172] (0xc0023b6370) (0xc002a7ec80) Stream removed, broadcasting: 1
I1022 19:34:19.134464       6 log.go:172] (0xc0023b6370) Go away received
I1022 19:34:19.134524       6 log.go:172] (0xc0023b6370) (0xc002a7ec80) Stream removed, broadcasting: 1
I1022 19:34:19.134540       6 log.go:172] (0xc0023b6370) (0xc002929a40) Stream removed, broadcasting: 3
I1022 19:34:19.134554       6 log.go:172] (0xc0023b6370) (0xc0019e52c0) Stream removed, broadcasting: 5
Oct 22 19:34:19.134: INFO: Exec stderr: ""
Oct 22 19:34:19.134: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1399 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 22 19:34:19.134: INFO: >>> kubeConfig: /root/.kube/config
I1022 19:34:19.162053       6 log.go:172] (0xc0023b6bb0) (0xc002a7f040) Create stream
I1022 19:34:19.162081       6 log.go:172] (0xc0023b6bb0) (0xc002a7f040) Stream added, broadcasting: 1
I1022 19:34:19.164670       6 log.go:172] (0xc0023b6bb0) Reply frame received for 1
I1022 19:34:19.164695       6 log.go:172] (0xc0023b6bb0) (0xc002a7f0e0) Create stream
I1022 19:34:19.164703       6 log.go:172] (0xc0023b6bb0) (0xc002a7f0e0) Stream added, broadcasting: 3
I1022 19:34:19.165979       6 log.go:172] (0xc0023b6bb0) Reply frame received for 3
I1022 19:34:19.166017       6 log.go:172] (0xc0023b6bb0) (0xc002a7f180) Create stream
I1022 19:34:19.166029       6 log.go:172] (0xc0023b6bb0) (0xc002a7f180) Stream added, broadcasting: 5
I1022 19:34:19.166963       6 log.go:172] (0xc0023b6bb0) Reply frame received for 5
I1022 19:34:19.239381       6 log.go:172] (0xc0023b6bb0) Data frame received for 5
I1022 19:34:19.239425       6 log.go:172] (0xc002a7f180) (5) Data frame handling
I1022 19:34:19.239470       6 log.go:172] (0xc0023b6bb0) Data frame received for 3
I1022 19:34:19.239507       6 log.go:172] (0xc002a7f0e0) (3) Data frame handling
I1022 19:34:19.239549       6 log.go:172] (0xc002a7f0e0) (3) Data frame sent
I1022 19:34:19.239582       6 log.go:172] (0xc0023b6bb0) Data frame received for 3
I1022 19:34:19.239613       6 log.go:172] (0xc002a7f0e0) (3) Data frame handling
I1022 19:34:19.241131       6 log.go:172] (0xc0023b6bb0) Data frame received for 1
I1022 19:34:19.241163       6 log.go:172] (0xc002a7f040) (1) Data frame handling
I1022 19:34:19.241202       6 log.go:172] (0xc002a7f040) (1) Data frame sent
I1022 19:34:19.241225       6 log.go:172] (0xc0023b6bb0) (0xc002a7f040) Stream removed, broadcasting: 1
I1022 19:34:19.241358       6 log.go:172] (0xc0023b6bb0) (0xc002a7f040) Stream removed, broadcasting: 1
I1022 19:34:19.241431       6 log.go:172] (0xc0023b6bb0) (0xc002a7f0e0) Stream removed, broadcasting: 3
I1022 19:34:19.241484       6 log.go:172] (0xc0023b6bb0) (0xc002a7f180) Stream removed, broadcasting: 5
Oct 22 19:34:19.241: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
I1022 19:34:19.241554       6 log.go:172] (0xc0023b6bb0) Go away received
Oct 22 19:34:19.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-1399" for this suite.
Oct 22 19:35:09.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:35:09.339: INFO: namespace e2e-kubelet-etc-hosts-1399 deletion completed in 50.093477083s

• [SLOW TEST:61.323 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
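
The spec above finishes by exec'ing cat /etc/hosts-original inside the test pods through the framework's ExecWithOptions helper. The following is a minimal client-go sketch of what such an exec call looks like, not the framework's actual code; the pod, container, and namespace names are copied from the log, and it assumes a client-go release contemporary with this v1.15 cluster (Stream without a context argument).

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	// Build a rest.Config from the same kubeconfig the test run uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// POST to the pod's "exec" subresource, capturing stdout and stderr only.
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("e2e-kubelet-etc-hosts-1399").
		Name("test-host-network-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "busybox-2",
			Command:   []string{"cat", "/etc/hosts-original"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	// The SPDY executor multiplexes stdout, stderr, and the error channel;
	// the "Create stream" / "Data frame received" lines in the log above are
	// those streams being set up and drained.
	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Printf("stdout: %q\nstderr: %q\n", stdout.String(), stderr.String())
}
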
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:35:09.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Oct 22 19:35:13.966: INFO: Successfully updated pod "annotationupdate99a91d12-feec-417f-a64e-ef62538a77af"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:35:16.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3517" for this suite.
Oct 22 19:35:38.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:35:38.103: INFO: namespace projected-3517 deletion completed in 22.081040385s

• [SLOW TEST:28.763 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
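
The spec above mounts the pod's own annotations through a projected downwardAPI volume and then updates them; the kubelet rewrites the mounted file, which is what the "Successfully updated pod" line verifies. Below is a hedged sketch of a pod spec with that shape; the image, command, and mount path are illustrative assumptions rather than values from the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo",
			Annotations: map[string]string{"build": "one"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative; the e2e test uses its own image
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					// Projected volume with a downwardAPI source exposing
					// metadata.annotations; the kubelet refreshes the file
					// when the pod's annotations are modified.
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "annotations",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
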
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:35:38.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-dc20d14e-7a6f-4741-a883-dbdd553859d3 in namespace container-probe-7863
Oct 22 19:35:42.205: INFO: Started pod liveness-dc20d14e-7a6f-4741-a883-dbdd553859d3 in namespace container-probe-7863
STEP: checking the pod's current state and verifying that restartCount is present
Oct 22 19:35:42.208: INFO: Initial restart count of pod liveness-dc20d14e-7a6f-4741-a883-dbdd553859d3 is 0
Oct 22 19:36:06.258: INFO: Restart count of pod container-probe-7863/liveness-dc20d14e-7a6f-4741-a883-dbdd553859d3 is now 1 (24.05075775s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:36:06.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7863" for this suite.
Oct 22 19:36:12.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:36:12.424: INFO: namespace container-probe-7863 deletion completed in 6.118222087s

• [SLOW TEST:34.321 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
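
The restart observed above (count 0 to 1 after roughly 24s) is driven by an HTTP liveness probe against /healthz. The sketch below shows a pod spec with such a probe; the image, port, and thresholds are illustrative assumptions (the conformance test uses its own liveness image), and the embedded field is named Handler in the 1.15-era API (ProbeHandler in newer releases).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/liveness", // illustrative; serves /healthz, then starts failing
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // named ProbeHandler in newer API versions
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	// Once the probe starts failing, the kubelet kills and restarts the
	// container, which is what drives restartCount from 0 to 1 in the log.
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
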
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:36:12.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Oct 22 19:36:12.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1692'
Oct 22 19:36:12.572: INFO: stderr: ""
Oct 22 19:36:12.572: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Oct 22 19:36:12.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1692'
Oct 22 19:36:18.449: INFO: stderr: ""
Oct 22 19:36:18.449: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:36:18.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1692" for this suite.
Oct 22 19:36:24.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:36:24.575: INFO: namespace kubectl-1692 deletion completed in 6.089789686s

• [SLOW TEST:12.150 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
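
The step above shells out to kubectl to create a bare pod (no controller) with --restart=Never and then deletes it. The following is a rough Go equivalent of those two invocations using os/exec; the namespace and image are taken from the log, while the log's --generator flag is omitted on the assumption that a current kubectl, where that flag no longer exists, is being used.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Create a standalone pod; --restart=Never means no Deployment or Job is
	// generated, just a single Pod object.
	run := exec.Command("kubectl", "run", "e2e-test-nginx-pod",
		"--restart=Never",
		"--image=docker.io/library/nginx:1.14-alpine",
		"--namespace=kubectl-1692")
	out, err := run.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		panic(err)
	}

	// Clean up, mirroring the AfterEach step in the log.
	del := exec.Command("kubectl", "delete", "pod", "e2e-test-nginx-pod",
		"--namespace=kubectl-1692")
	out, err = del.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		panic(err)
	}
}
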
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:36:24.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Oct 22 19:36:24.716: INFO: Waiting up to 5m0s for pod "client-containers-abe33036-9821-43e5-9a92-bb234c442e48" in namespace "containers-1870" to be "success or failure"
Oct 22 19:36:24.754: INFO: Pod "client-containers-abe33036-9821-43e5-9a92-bb234c442e48": Phase="Pending", Reason="", readiness=false. Elapsed: 37.713487ms
Oct 22 19:36:26.757: INFO: Pod "client-containers-abe33036-9821-43e5-9a92-bb234c442e48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041475904s
Oct 22 19:36:28.761: INFO: Pod "client-containers-abe33036-9821-43e5-9a92-bb234c442e48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045437428s
STEP: Saw pod success
Oct 22 19:36:28.761: INFO: Pod "client-containers-abe33036-9821-43e5-9a92-bb234c442e48" satisfied condition "success or failure"
Oct 22 19:36:28.764: INFO: Trying to get logs from node iruya-worker pod client-containers-abe33036-9821-43e5-9a92-bb234c442e48 container test-container: 
STEP: delete the pod
Oct 22 19:36:28.836: INFO: Waiting for pod client-containers-abe33036-9821-43e5-9a92-bb234c442e48 to disappear
Oct 22 19:36:28.844: INFO: Pod client-containers-abe33036-9821-43e5-9a92-bb234c442e48 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:36:28.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1870" for this suite.
Oct 22 19:36:34.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:36:34.933: INFO: namespace containers-1870 deletion completed in 6.086036061s

• [SLOW TEST:10.357 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
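
Leaving Command and Args unset, as the pod above does, means the image's own ENTRYPOINT and CMD run, and the test then just waits for the pod to reach a terminal phase. Below is a minimal sketch of that "success or failure" wait loop with client-go; the pod and namespace names come from the log, and it assumes pre-0.18 client-go signatures (no context argument), matching this v1.15 cluster.

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Poll every 2s, for up to 5m, until the pod reaches Succeeded or Failed,
	// the same condition the log reports as "success or failure".
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := clientset.CoreV1().
			Pods("containers-1870").
			Get("client-containers-abe33036-9821-43e5-9a92-bb234c442e48", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("phase=%s\n", pod.Status.Phase)
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
	if err != nil {
		panic(err)
	}
}
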
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:36:34.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Oct 22 19:36:58.439: INFO: Container started at 2020-10-22 19:36:39 +0000 UTC, pod became ready at 2020-10-22 19:36:58 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:36:58.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7777" for this suite.
Oct 22 19:37:22.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:37:22.543: INFO: namespace container-probe-7777 deletion completed in 24.100605733s

• [SLOW TEST:47.610 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
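
The roughly 19-second gap the log reports between container start and pod readiness comes from the readiness probe's initial delay: the container is never restarted, it simply is not marked Ready until the probe has been allowed to run and succeed. A hedged sketch of such a spec follows; the image, command, probe action, and timings are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "readiness-demo",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "sleep 3600"},
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // named ProbeHandler in newer API versions
						Exec: &corev1.ExecAction{Command: []string{"true"}},
					},
					// Nothing is probed before the initial delay, so the Ready
					// condition stays False for at least this long even though
					// the container itself started immediately.
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
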
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:37:22.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Oct 22 19:37:22.587: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:37:30.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5426" for this suite.
Oct 22 19:37:36.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:37:36.462: INFO: namespace init-container-5426 deletion completed in 6.096019938s

• [SLOW TEST:13.918 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
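
The "initContainers in spec.initContainers" step above creates a RestartPolicy=Never pod whose init containers must each run to completion, in order, before the app container starts. A minimal sketch of that shape; the names, images, and commands are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			// With Never, a failing init container is not retried and the pod
			// goes straight to Failed; here both init containers exit 0, so the
			// Initialized condition flips to True and the "run" container starts.
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "run", Image: "busybox", Command: []string{"sh", "-c", "echo done"}},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
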
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:37:36.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Oct 22 19:37:36.525: INFO: Waiting up to 5m0s for pod "downwardapi-volume-093d9865-f4d0-44f5-8e40-2da18034392e" in namespace "downward-api-2187" to be "success or failure"
Oct 22 19:37:36.528: INFO: Pod "downwardapi-volume-093d9865-f4d0-44f5-8e40-2da18034392e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.428898ms
Oct 22 19:37:38.532: INFO: Pod "downwardapi-volume-093d9865-f4d0-44f5-8e40-2da18034392e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007248799s
Oct 22 19:37:40.537: INFO: Pod "downwardapi-volume-093d9865-f4d0-44f5-8e40-2da18034392e": Phase="Running", Reason="", readiness=true. Elapsed: 4.011682797s
Oct 22 19:37:42.541: INFO: Pod "downwardapi-volume-093d9865-f4d0-44f5-8e40-2da18034392e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016131934s
STEP: Saw pod success
Oct 22 19:37:42.541: INFO: Pod "downwardapi-volume-093d9865-f4d0-44f5-8e40-2da18034392e" satisfied condition "success or failure"
Oct 22 19:37:42.544: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-093d9865-f4d0-44f5-8e40-2da18034392e container client-container: 
STEP: delete the pod
Oct 22 19:37:42.571: INFO: Waiting for pod downwardapi-volume-093d9865-f4d0-44f5-8e40-2da18034392e to disappear
Oct 22 19:37:42.582: INFO: Pod downwardapi-volume-093d9865-f4d0-44f5-8e40-2da18034392e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:37:42.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2187" for this suite.
Oct 22 19:37:48.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:37:48.677: INFO: namespace downward-api-2187 deletion completed in 6.091170115s

• [SLOW TEST:12.215 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
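
The downward API volume exercised above exposes the container's own memory request as a file inside the pod. Below is a hedged sketch of such a spec; the file path, names, command, and the 32Mi request are illustrative assumptions, with the value written using the default divisor of 1, i.e. as a plain byte count.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							// requests.memory of this container, resolved by the
							// kubelet and written into the mounted file.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
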
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:37:48.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5689
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-5689
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5689
Oct 22 19:37:48.762: INFO: Found 0 stateful pods, waiting for 1
Oct 22 19:37:59.437: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Oct 22 19:37:59.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Oct 22 19:37:59.766: INFO: stderr: "I1022 19:37:59.603810    1278 log.go:172] (0xc0009f2580) (0xc0005b0820) Create stream\nI1022 19:37:59.603882    1278 log.go:172] (0xc0009f2580) (0xc0005b0820) Stream added, broadcasting: 1\nI1022 19:37:59.608302    1278 log.go:172] (0xc0009f2580) Reply frame received for 1\nI1022 19:37:59.608349    1278 log.go:172] (0xc0009f2580) (0xc0005b0000) Create stream\nI1022 19:37:59.608357    1278 log.go:172] (0xc0009f2580) (0xc0005b0000) Stream added, broadcasting: 3\nI1022 19:37:59.609284    1278 log.go:172] (0xc0009f2580) Reply frame received for 3\nI1022 19:37:59.609333    1278 log.go:172] (0xc0009f2580) (0xc0005a6140) Create stream\nI1022 19:37:59.609352    1278 log.go:172] (0xc0009f2580) (0xc0005a6140) Stream added, broadcasting: 5\nI1022 19:37:59.610115    1278 log.go:172] (0xc0009f2580) Reply frame received for 5\nI1022 19:37:59.697116    1278 log.go:172] (0xc0009f2580) Data frame received for 5\nI1022 19:37:59.697166    1278 log.go:172] (0xc0005a6140) (5) Data frame handling\nI1022 19:37:59.697199    1278 log.go:172] (0xc0005a6140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1022 19:37:59.757229    1278 log.go:172] (0xc0009f2580) Data frame received for 3\nI1022 19:37:59.757290    1278 log.go:172] (0xc0005b0000) (3) Data frame handling\nI1022 19:37:59.757311    1278 log.go:172] (0xc0005b0000) (3) Data frame sent\nI1022 19:37:59.757325    1278 log.go:172] (0xc0009f2580) Data frame received for 3\nI1022 19:37:59.757335    1278 log.go:172] (0xc0005b0000) (3) Data frame handling\nI1022 19:37:59.757363    1278 log.go:172] (0xc0009f2580) Data frame received for 5\nI1022 19:37:59.757385    1278 log.go:172] (0xc0005a6140) (5) Data frame handling\nI1022 19:37:59.759358    1278 log.go:172] (0xc0009f2580) Data frame received for 1\nI1022 19:37:59.759375    1278 log.go:172] (0xc0005b0820) (1) Data frame handling\nI1022 19:37:59.759390    1278 log.go:172] (0xc0005b0820) (1) Data frame sent\nI1022 19:37:59.759470    1278 log.go:172] (0xc0009f2580) (0xc0005b0820) Stream removed, broadcasting: 1\nI1022 19:37:59.759496    1278 log.go:172] (0xc0009f2580) Go away received\nI1022 19:37:59.759935    1278 log.go:172] (0xc0009f2580) (0xc0005b0820) Stream removed, broadcasting: 1\nI1022 19:37:59.759967    1278 log.go:172] (0xc0009f2580) (0xc0005b0000) Stream removed, broadcasting: 3\nI1022 19:37:59.759980    1278 log.go:172] (0xc0009f2580) (0xc0005a6140) Stream removed, broadcasting: 5\n"
Oct 22 19:37:59.766: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Oct 22 19:37:59.766: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Oct 22 19:37:59.771: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Oct 22 19:38:09.775: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Oct 22 19:38:09.775: INFO: Waiting for statefulset status.replicas updated to 0
Oct 22 19:38:09.791: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Oct 22 19:38:09.791: INFO: ss-0  iruya-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:37:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:37:48 +0000 UTC  }]
Oct 22 19:38:09.791: INFO: 
Oct 22 19:38:09.791: INFO: StatefulSet ss has not reached scale 3, at 1
Oct 22 19:38:10.795: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991886799s
Oct 22 19:38:11.799: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987777128s
Oct 22 19:38:13.003: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.984251802s
Oct 22 19:38:14.009: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.780207028s
Oct 22 19:38:15.031: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.774250269s
Oct 22 19:38:16.035: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.751665431s
Oct 22 19:38:17.041: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.747682487s
Oct 22 19:38:18.046: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.742284639s
Oct 22 19:38:19.051: INFO: Verifying statefulset ss doesn't scale past 3 for another 736.980622ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5689
Oct 22 19:38:20.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:38:20.253: INFO: stderr: "I1022 19:38:20.184582    1299 log.go:172] (0xc00079c420) (0xc0003e0820) Create stream\nI1022 19:38:20.184639    1299 log.go:172] (0xc00079c420) (0xc0003e0820) Stream added, broadcasting: 1\nI1022 19:38:20.187672    1299 log.go:172] (0xc00079c420) Reply frame received for 1\nI1022 19:38:20.187716    1299 log.go:172] (0xc00079c420) (0xc0003e0000) Create stream\nI1022 19:38:20.187729    1299 log.go:172] (0xc00079c420) (0xc0003e0000) Stream added, broadcasting: 3\nI1022 19:38:20.188538    1299 log.go:172] (0xc00079c420) Reply frame received for 3\nI1022 19:38:20.188592    1299 log.go:172] (0xc00079c420) (0xc0005fc1e0) Create stream\nI1022 19:38:20.188609    1299 log.go:172] (0xc00079c420) (0xc0005fc1e0) Stream added, broadcasting: 5\nI1022 19:38:20.189750    1299 log.go:172] (0xc00079c420) Reply frame received for 5\nI1022 19:38:20.245456    1299 log.go:172] (0xc00079c420) Data frame received for 5\nI1022 19:38:20.245482    1299 log.go:172] (0xc0005fc1e0) (5) Data frame handling\nI1022 19:38:20.245489    1299 log.go:172] (0xc0005fc1e0) (5) Data frame sent\nI1022 19:38:20.245495    1299 log.go:172] (0xc00079c420) Data frame received for 5\nI1022 19:38:20.245500    1299 log.go:172] (0xc0005fc1e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI1022 19:38:20.245527    1299 log.go:172] (0xc00079c420) Data frame received for 3\nI1022 19:38:20.245535    1299 log.go:172] (0xc0003e0000) (3) Data frame handling\nI1022 19:38:20.245544    1299 log.go:172] (0xc0003e0000) (3) Data frame sent\nI1022 19:38:20.245556    1299 log.go:172] (0xc00079c420) Data frame received for 3\nI1022 19:38:20.245567    1299 log.go:172] (0xc0003e0000) (3) Data frame handling\nI1022 19:38:20.247189    1299 log.go:172] (0xc00079c420) Data frame received for 1\nI1022 19:38:20.247216    1299 log.go:172] (0xc0003e0820) (1) Data frame handling\nI1022 19:38:20.247242    1299 log.go:172] (0xc0003e0820) (1) Data frame sent\nI1022 19:38:20.247260    1299 log.go:172] (0xc00079c420) (0xc0003e0820) Stream removed, broadcasting: 1\nI1022 19:38:20.247281    1299 log.go:172] (0xc00079c420) Go away received\nI1022 19:38:20.247540    1299 log.go:172] (0xc00079c420) (0xc0003e0820) Stream removed, broadcasting: 1\nI1022 19:38:20.247554    1299 log.go:172] (0xc00079c420) (0xc0003e0000) Stream removed, broadcasting: 3\nI1022 19:38:20.247559    1299 log.go:172] (0xc00079c420) (0xc0005fc1e0) Stream removed, broadcasting: 5\n"
Oct 22 19:38:20.253: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Oct 22 19:38:20.253: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Oct 22 19:38:20.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:38:20.468: INFO: stderr: "I1022 19:38:20.375944    1319 log.go:172] (0xc000852420) (0xc000444820) Create stream\nI1022 19:38:20.375992    1319 log.go:172] (0xc000852420) (0xc000444820) Stream added, broadcasting: 1\nI1022 19:38:20.377943    1319 log.go:172] (0xc000852420) Reply frame received for 1\nI1022 19:38:20.377976    1319 log.go:172] (0xc000852420) (0xc0005c2280) Create stream\nI1022 19:38:20.377984    1319 log.go:172] (0xc000852420) (0xc0005c2280) Stream added, broadcasting: 3\nI1022 19:38:20.378887    1319 log.go:172] (0xc000852420) Reply frame received for 3\nI1022 19:38:20.378944    1319 log.go:172] (0xc000852420) (0xc0004448c0) Create stream\nI1022 19:38:20.378967    1319 log.go:172] (0xc000852420) (0xc0004448c0) Stream added, broadcasting: 5\nI1022 19:38:20.379795    1319 log.go:172] (0xc000852420) Reply frame received for 5\nI1022 19:38:20.449930    1319 log.go:172] (0xc000852420) Data frame received for 5\nI1022 19:38:20.449971    1319 log.go:172] (0xc0004448c0) (5) Data frame handling\nI1022 19:38:20.449991    1319 log.go:172] (0xc0004448c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI1022 19:38:20.459420    1319 log.go:172] (0xc000852420) Data frame received for 5\nI1022 19:38:20.459447    1319 log.go:172] (0xc0004448c0) (5) Data frame handling\nI1022 19:38:20.459463    1319 log.go:172] (0xc0004448c0) (5) Data frame sent\nI1022 19:38:20.459470    1319 log.go:172] (0xc000852420) Data frame received for 5\nI1022 19:38:20.459476    1319 log.go:172] (0xc0004448c0) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI1022 19:38:20.459509    1319 log.go:172] (0xc000852420) Data frame received for 3\nI1022 19:38:20.459545    1319 log.go:172] (0xc0005c2280) (3) Data frame handling\nI1022 19:38:20.459559    1319 log.go:172] (0xc0005c2280) (3) Data frame sent\nI1022 19:38:20.459594    1319 log.go:172] (0xc0004448c0) (5) Data frame sent\nI1022 19:38:20.459795    1319 log.go:172] (0xc000852420) Data frame received for 5\nI1022 19:38:20.459830    1319 log.go:172] (0xc0004448c0) (5) Data frame handling\nI1022 19:38:20.459859    1319 log.go:172] (0xc000852420) Data frame received for 3\nI1022 19:38:20.459882    1319 log.go:172] (0xc0005c2280) (3) Data frame handling\nI1022 19:38:20.461799    1319 log.go:172] (0xc000852420) Data frame received for 1\nI1022 19:38:20.461897    1319 log.go:172] (0xc000444820) (1) Data frame handling\nI1022 19:38:20.461955    1319 log.go:172] (0xc000444820) (1) Data frame sent\nI1022 19:38:20.461988    1319 log.go:172] (0xc000852420) (0xc000444820) Stream removed, broadcasting: 1\nI1022 19:38:20.462033    1319 log.go:172] (0xc000852420) Go away received\nI1022 19:38:20.462415    1319 log.go:172] (0xc000852420) (0xc000444820) Stream removed, broadcasting: 1\nI1022 19:38:20.462439    1319 log.go:172] (0xc000852420) (0xc0005c2280) Stream removed, broadcasting: 3\nI1022 19:38:20.462452    1319 log.go:172] (0xc000852420) (0xc0004448c0) Stream removed, broadcasting: 5\n"
Oct 22 19:38:20.469: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Oct 22 19:38:20.469: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Oct 22 19:38:20.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:38:20.687: INFO: stderr: "I1022 19:38:20.601675    1339 log.go:172] (0xc000a68370) (0xc0005c6780) Create stream\nI1022 19:38:20.601726    1339 log.go:172] (0xc000a68370) (0xc0005c6780) Stream added, broadcasting: 1\nI1022 19:38:20.604225    1339 log.go:172] (0xc000a68370) Reply frame received for 1\nI1022 19:38:20.604289    1339 log.go:172] (0xc000a68370) (0xc000970000) Create stream\nI1022 19:38:20.604312    1339 log.go:172] (0xc000a68370) (0xc000970000) Stream added, broadcasting: 3\nI1022 19:38:20.605629    1339 log.go:172] (0xc000a68370) Reply frame received for 3\nI1022 19:38:20.605658    1339 log.go:172] (0xc000a68370) (0xc0009a4000) Create stream\nI1022 19:38:20.605673    1339 log.go:172] (0xc000a68370) (0xc0009a4000) Stream added, broadcasting: 5\nI1022 19:38:20.606796    1339 log.go:172] (0xc000a68370) Reply frame received for 5\nI1022 19:38:20.677648    1339 log.go:172] (0xc000a68370) Data frame received for 5\nI1022 19:38:20.677676    1339 log.go:172] (0xc0009a4000) (5) Data frame handling\nI1022 19:38:20.677685    1339 log.go:172] (0xc0009a4000) (5) Data frame sent\nI1022 19:38:20.677691    1339 log.go:172] (0xc000a68370) Data frame received for 5\nI1022 19:38:20.677695    1339 log.go:172] (0xc0009a4000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI1022 19:38:20.677724    1339 log.go:172] (0xc000a68370) Data frame received for 3\nI1022 19:38:20.677730    1339 log.go:172] (0xc000970000) (3) Data frame handling\nI1022 19:38:20.677740    1339 log.go:172] (0xc000970000) (3) Data frame sent\nI1022 19:38:20.677744    1339 log.go:172] (0xc000a68370) Data frame received for 3\nI1022 19:38:20.677749    1339 log.go:172] (0xc000970000) (3) Data frame handling\nI1022 19:38:20.679285    1339 log.go:172] (0xc000a68370) Data frame received for 1\nI1022 19:38:20.679296    1339 log.go:172] (0xc0005c6780) (1) Data frame handling\nI1022 19:38:20.679302    1339 log.go:172] (0xc0005c6780) (1) Data frame sent\nI1022 19:38:20.679435    1339 log.go:172] (0xc000a68370) (0xc0005c6780) Stream removed, broadcasting: 1\nI1022 19:38:20.681626    1339 log.go:172] (0xc000a68370) Go away received\nI1022 19:38:20.682051    1339 log.go:172] (0xc000a68370) (0xc0005c6780) Stream removed, broadcasting: 1\nI1022 19:38:20.682075    1339 log.go:172] (0xc000a68370) (0xc000970000) Stream removed, broadcasting: 3\nI1022 19:38:20.682089    1339 log.go:172] (0xc000a68370) (0xc0009a4000) Stream removed, broadcasting: 5\n"
Oct 22 19:38:20.687: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Oct 22 19:38:20.687: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Oct 22 19:38:20.691: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Oct 22 19:38:30.696: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 22 19:38:30.696: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 22 19:38:30.696: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Oct 22 19:38:30.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Oct 22 19:38:30.908: INFO: stderr: "I1022 19:38:30.833934    1360 log.go:172] (0xc000ade420) (0xc000390820) Create stream\nI1022 19:38:30.834004    1360 log.go:172] (0xc000ade420) (0xc000390820) Stream added, broadcasting: 1\nI1022 19:38:30.839355    1360 log.go:172] (0xc000ade420) Reply frame received for 1\nI1022 19:38:30.839435    1360 log.go:172] (0xc000ade420) (0xc000390000) Create stream\nI1022 19:38:30.839467    1360 log.go:172] (0xc000ade420) (0xc000390000) Stream added, broadcasting: 3\nI1022 19:38:30.840735    1360 log.go:172] (0xc000ade420) Reply frame received for 3\nI1022 19:38:30.841048    1360 log.go:172] (0xc000ade420) (0xc0006063c0) Create stream\nI1022 19:38:30.841071    1360 log.go:172] (0xc000ade420) (0xc0006063c0) Stream added, broadcasting: 5\nI1022 19:38:30.842147    1360 log.go:172] (0xc000ade420) Reply frame received for 5\nI1022 19:38:30.901422    1360 log.go:172] (0xc000ade420) Data frame received for 5\nI1022 19:38:30.901460    1360 log.go:172] (0xc000ade420) Data frame received for 3\nI1022 19:38:30.901498    1360 log.go:172] (0xc0006063c0) (5) Data frame handling\nI1022 19:38:30.901524    1360 log.go:172] (0xc0006063c0) (5) Data frame sent\nI1022 19:38:30.901538    1360 log.go:172] (0xc000ade420) Data frame received for 5\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1022 19:38:30.901545    1360 log.go:172] (0xc0006063c0) (5) Data frame handling\nI1022 19:38:30.901595    1360 log.go:172] (0xc000390000) (3) Data frame handling\nI1022 19:38:30.901653    1360 log.go:172] (0xc000390000) (3) Data frame sent\nI1022 19:38:30.901671    1360 log.go:172] (0xc000ade420) Data frame received for 3\nI1022 19:38:30.901682    1360 log.go:172] (0xc000390000) (3) Data frame handling\nI1022 19:38:30.903478    1360 log.go:172] (0xc000ade420) Data frame received for 1\nI1022 19:38:30.903500    1360 log.go:172] (0xc000390820) (1) Data frame handling\nI1022 19:38:30.903518    1360 log.go:172] (0xc000390820) (1) Data frame sent\nI1022 19:38:30.903554    1360 log.go:172] (0xc000ade420) (0xc000390820) Stream removed, broadcasting: 1\nI1022 19:38:30.903597    1360 log.go:172] (0xc000ade420) Go away received\nI1022 19:38:30.904207    1360 log.go:172] (0xc000ade420) (0xc000390820) Stream removed, broadcasting: 1\nI1022 19:38:30.904251    1360 log.go:172] (0xc000ade420) (0xc000390000) Stream removed, broadcasting: 3\nI1022 19:38:30.904280    1360 log.go:172] (0xc000ade420) (0xc0006063c0) Stream removed, broadcasting: 5\n"
Oct 22 19:38:30.908: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Oct 22 19:38:30.908: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Oct 22 19:38:30.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Oct 22 19:38:31.149: INFO: stderr: "I1022 19:38:31.024929    1380 log.go:172] (0xc00010c790) (0xc00046f2c0) Create stream\nI1022 19:38:31.025000    1380 log.go:172] (0xc00010c790) (0xc00046f2c0) Stream added, broadcasting: 1\nI1022 19:38:31.027597    1380 log.go:172] (0xc00010c790) Reply frame received for 1\nI1022 19:38:31.027654    1380 log.go:172] (0xc00010c790) (0xc0003e3860) Create stream\nI1022 19:38:31.027670    1380 log.go:172] (0xc00010c790) (0xc0003e3860) Stream added, broadcasting: 3\nI1022 19:38:31.028982    1380 log.go:172] (0xc00010c790) Reply frame received for 3\nI1022 19:38:31.029016    1380 log.go:172] (0xc00010c790) (0xc0003e3900) Create stream\nI1022 19:38:31.029028    1380 log.go:172] (0xc00010c790) (0xc0003e3900) Stream added, broadcasting: 5\nI1022 19:38:31.030097    1380 log.go:172] (0xc00010c790) Reply frame received for 5\nI1022 19:38:31.105297    1380 log.go:172] (0xc00010c790) Data frame received for 5\nI1022 19:38:31.105324    1380 log.go:172] (0xc0003e3900) (5) Data frame handling\nI1022 19:38:31.105343    1380 log.go:172] (0xc0003e3900) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1022 19:38:31.141556    1380 log.go:172] (0xc00010c790) Data frame received for 5\nI1022 19:38:31.141606    1380 log.go:172] (0xc0003e3900) (5) Data frame handling\nI1022 19:38:31.141645    1380 log.go:172] (0xc00010c790) Data frame received for 3\nI1022 19:38:31.141667    1380 log.go:172] (0xc0003e3860) (3) Data frame handling\nI1022 19:38:31.141699    1380 log.go:172] (0xc0003e3860) (3) Data frame sent\nI1022 19:38:31.141722    1380 log.go:172] (0xc00010c790) Data frame received for 3\nI1022 19:38:31.141736    1380 log.go:172] (0xc0003e3860) (3) Data frame handling\nI1022 19:38:31.143817    1380 log.go:172] (0xc00010c790) Data frame received for 1\nI1022 19:38:31.143840    1380 log.go:172] (0xc00046f2c0) (1) Data frame handling\nI1022 19:38:31.143866    1380 log.go:172] (0xc00046f2c0) (1) Data frame sent\nI1022 19:38:31.143900    1380 log.go:172] (0xc00010c790) (0xc00046f2c0) Stream removed, broadcasting: 1\nI1022 19:38:31.143939    1380 log.go:172] (0xc00010c790) Go away received\nI1022 19:38:31.144274    1380 log.go:172] (0xc00010c790) (0xc00046f2c0) Stream removed, broadcasting: 1\nI1022 19:38:31.144303    1380 log.go:172] (0xc00010c790) (0xc0003e3860) Stream removed, broadcasting: 3\nI1022 19:38:31.144317    1380 log.go:172] (0xc00010c790) (0xc0003e3900) Stream removed, broadcasting: 5\n"
Oct 22 19:38:31.149: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Oct 22 19:38:31.149: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Oct 22 19:38:31.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Oct 22 19:38:31.371: INFO: stderr: "I1022 19:38:31.265172    1402 log.go:172] (0xc000950370) (0xc0007a45a0) Create stream\nI1022 19:38:31.265240    1402 log.go:172] (0xc000950370) (0xc0007a45a0) Stream added, broadcasting: 1\nI1022 19:38:31.274269    1402 log.go:172] (0xc000950370) Reply frame received for 1\nI1022 19:38:31.274300    1402 log.go:172] (0xc000950370) (0xc000908000) Create stream\nI1022 19:38:31.274311    1402 log.go:172] (0xc000950370) (0xc000908000) Stream added, broadcasting: 3\nI1022 19:38:31.280289    1402 log.go:172] (0xc000950370) Reply frame received for 3\nI1022 19:38:31.280328    1402 log.go:172] (0xc000950370) (0xc000604280) Create stream\nI1022 19:38:31.280345    1402 log.go:172] (0xc000950370) (0xc000604280) Stream added, broadcasting: 5\nI1022 19:38:31.281291    1402 log.go:172] (0xc000950370) Reply frame received for 5\nI1022 19:38:31.334619    1402 log.go:172] (0xc000950370) Data frame received for 5\nI1022 19:38:31.334645    1402 log.go:172] (0xc000604280) (5) Data frame handling\nI1022 19:38:31.334660    1402 log.go:172] (0xc000604280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1022 19:38:31.363236    1402 log.go:172] (0xc000950370) Data frame received for 3\nI1022 19:38:31.363290    1402 log.go:172] (0xc000908000) (3) Data frame handling\nI1022 19:38:31.363305    1402 log.go:172] (0xc000908000) (3) Data frame sent\nI1022 19:38:31.363316    1402 log.go:172] (0xc000950370) Data frame received for 3\nI1022 19:38:31.363331    1402 log.go:172] (0xc000908000) (3) Data frame handling\nI1022 19:38:31.363377    1402 log.go:172] (0xc000950370) Data frame received for 5\nI1022 19:38:31.363387    1402 log.go:172] (0xc000604280) (5) Data frame handling\nI1022 19:38:31.365277    1402 log.go:172] (0xc000950370) Data frame received for 1\nI1022 19:38:31.365300    1402 log.go:172] (0xc0007a45a0) (1) Data frame handling\nI1022 19:38:31.365321    1402 log.go:172] (0xc0007a45a0) (1) Data frame sent\nI1022 19:38:31.365330    1402 log.go:172] (0xc000950370) (0xc0007a45a0) Stream removed, broadcasting: 1\nI1022 19:38:31.365338    1402 log.go:172] (0xc000950370) Go away received\nI1022 19:38:31.365664    1402 log.go:172] (0xc000950370) (0xc0007a45a0) Stream removed, broadcasting: 1\nI1022 19:38:31.365688    1402 log.go:172] (0xc000950370) (0xc000908000) Stream removed, broadcasting: 3\nI1022 19:38:31.365699    1402 log.go:172] (0xc000950370) (0xc000604280) Stream removed, broadcasting: 5\n"
Oct 22 19:38:31.371: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Oct 22 19:38:31.371: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Oct 22 19:38:31.371: INFO: Waiting for statefulset status.replicas updated to 0
Oct 22 19:38:31.375: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Oct 22 19:38:41.383: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Oct 22 19:38:41.383: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Oct 22 19:38:41.383: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Oct 22 19:38:41.396: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Oct 22 19:38:41.396: INFO: ss-0  iruya-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:37:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:37:48 +0000 UTC  }]
Oct 22 19:38:41.396: INFO: ss-1  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  }]
Oct 22 19:38:41.396: INFO: ss-2  iruya-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  }]
Oct 22 19:38:41.396: INFO: 
Oct 22 19:38:41.396: INFO: StatefulSet ss has not reached scale 0, at 3
Oct 22 19:38:42.539: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Oct 22 19:38:42.539: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:37:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:37:48 +0000 UTC  }]
Oct 22 19:38:42.539: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  }]
Oct 22 19:38:42.539: INFO: ss-2  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  }]
Oct 22 19:38:42.539: INFO: 
Oct 22 19:38:42.539: INFO: StatefulSet ss has not reached scale 0, at 3
Oct 22 19:38:43.869: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Oct 22 19:38:43.869: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:37:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:37:48 +0000 UTC  }]
Oct 22 19:38:43.869: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  }]
Oct 22 19:38:43.869: INFO: ss-2  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  }]
Oct 22 19:38:43.869: INFO: 
Oct 22 19:38:43.869: INFO: StatefulSet ss has not reached scale 0, at 3
Oct 22 19:38:44.873: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Oct 22 19:38:44.873: INFO: ss-0  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:37:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:37:48 +0000 UTC  }]
Oct 22 19:38:44.873: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  }]
Oct 22 19:38:44.874: INFO: ss-2  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  }]
Oct 22 19:38:44.874: INFO: 
Oct 22 19:38:44.874: INFO: StatefulSet ss has not reached scale 0, at 3
Oct 22 19:38:45.879: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Oct 22 19:38:45.879: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  }]
Oct 22 19:38:45.879: INFO: ss-2  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  }]
Oct 22 19:38:45.879: INFO: 
Oct 22 19:38:45.879: INFO: StatefulSet ss has not reached scale 0, at 2
Oct 22 19:38:46.883: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Oct 22 19:38:46.883: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  }]
Oct 22 19:38:46.883: INFO: ss-2  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  }]
Oct 22 19:38:46.883: INFO: 
Oct 22 19:38:46.883: INFO: StatefulSet ss has not reached scale 0, at 2
Oct 22 19:38:47.893: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Oct 22 19:38:47.893: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  }]
Oct 22 19:38:47.893: INFO: ss-2  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  }]
Oct 22 19:38:47.893: INFO: 
Oct 22 19:38:47.893: INFO: StatefulSet ss has not reached scale 0, at 2
Oct 22 19:38:48.899: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Oct 22 19:38:48.899: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  }]
Oct 22 19:38:48.899: INFO: ss-2  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  }]
Oct 22 19:38:48.899: INFO: 
Oct 22 19:38:48.899: INFO: StatefulSet ss has not reached scale 0, at 2
Oct 22 19:38:49.905: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Oct 22 19:38:49.905: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  }]
Oct 22 19:38:49.905: INFO: ss-2  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  }]
Oct 22 19:38:49.905: INFO: 
Oct 22 19:38:49.905: INFO: StatefulSet ss has not reached scale 0, at 2
Oct 22 19:38:50.910: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Oct 22 19:38:50.910: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  }]
Oct 22 19:38:50.910: INFO: ss-2  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 19:38:09 +0000 UTC  }]
Oct 22 19:38:50.910: INFO: 
Oct 22 19:38:50.910: INFO: StatefulSet ss has not reached scale 0, at 2
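The block above is the e2e framework polling the StatefulSet once per second and dumping the conditions of the pods that are still terminating, until status.replicas reaches the target of 0. A minimal sketch of the same wait with client-go follows; it assumes a recent client-go with context-taking calls, and the namespace, name and kubeconfig path are taken from this log purely for illustration.

// waitforscale.go - sketch of waiting for a StatefulSet to reach a target
// replica count, mirroring the 1s polling loop in the log above.
package main

import (
    "context"
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    ns, name, target := "statefulset-5689", "ss", int32(0)

    // Poll once per second, as the framework does, until status.replicas
    // reaches the target or the timeout expires.
    err = wait.PollImmediate(1*time.Second, 10*time.Minute, func() (bool, error) {
        ss, err := cs.AppsV1().StatefulSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        fmt.Printf("StatefulSet %s at scale %d, want %d\n", name, ss.Status.Replicas, target)
        return ss.Status.Replicas == target, nil
    })
    if err != nil {
        panic(err)
    }
}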
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-5689
Oct 22 19:38:51.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:38:52.063: INFO: rc: 1
Oct 22 19:38:52.063: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001f7ff50 exit status 1   true [0xc002fd2118 0xc002fd2130 0xc002fd2148] [0xc002fd2118 0xc002fd2130 0xc002fd2148] [0xc002fd2128 0xc002fd2140] [0xba70e0 0xba70e0] 0xc002c5e7e0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Oct 22 19:39:02.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:39:02.169: INFO: rc: 1
Oct 22 19:39:02.169: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0033e8750 exit status 1   true [0xc00090e830 0xc00090e858 0xc00090e878] [0xc00090e830 0xc00090e858 0xc00090e878] [0xc00090e850 0xc00090e868] [0xba70e0 0xba70e0] 0xc002ba1200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:39:12.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:39:12.271: INFO: rc: 1
Oct 22 19:39:12.271: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0033e8810 exit status 1   true [0xc00090e888 0xc00090e8c8 0xc00090e910] [0xc00090e888 0xc00090e8c8 0xc00090e910] [0xc00090e8a0 0xc00090e8f8] [0xba70e0 0xba70e0] 0xc002ba1920 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:39:22.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:39:22.369: INFO: rc: 1
Oct 22 19:39:22.369: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00071b950 exit status 1   true [0xc002fd2150 0xc002fd2168 0xc002fd2180] [0xc002fd2150 0xc002fd2168 0xc002fd2180] [0xc002fd2160 0xc002fd2178] [0xba70e0 0xba70e0] 0xc002c5ee40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:39:32.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:39:32.466: INFO: rc: 1
Oct 22 19:39:32.466: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0033e8900 exit status 1   true [0xc00090e938 0xc00090e950 0xc00090ea48] [0xc00090e938 0xc00090e950 0xc00090ea48] [0xc00090e948 0xc00090ea08] [0xba70e0 0xba70e0] 0xc003744540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:39:42.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:39:42.558: INFO: rc: 1
Oct 22 19:39:42.558: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0033e89c0 exit status 1   true [0xc00090ea58 0xc00090ee18 0xc00090ee30] [0xc00090ea58 0xc00090ee18 0xc00090ee30] [0xc00090edf0 0xc00090ee28] [0xba70e0 0xba70e0] 0xc003744ea0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:39:52.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:39:52.646: INFO: rc: 1
Oct 22 19:39:52.646: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00071ba70 exit status 1   true [0xc002fd2188 0xc002fd21a0 0xc002fd21b8] [0xc002fd2188 0xc002fd21a0 0xc002fd21b8] [0xc002fd2198 0xc002fd21b0] [0xba70e0 0xba70e0] 0xc002c5f200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:40:02.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:40:02.740: INFO: rc: 1
Oct 22 19:40:02.740: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001f7f8c0 exit status 1   true [0xc000010368 0xc0008e85b0 0xc0008e8c18] [0xc000010368 0xc0008e85b0 0xc0008e8c18] [0xc0008e8460 0xc0008e8a68] [0xba70e0 0xba70e0] 0xc0023f3e60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:40:12.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:40:12.833: INFO: rc: 1
Oct 22 19:40:12.833: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0034b2090 exit status 1   true [0xc00072eec8 0xc00072f158 0xc00072f468] [0xc00072eec8 0xc00072f158 0xc00072f468] [0xc00072f0c8 0xc00072f318] [0xba70e0 0xba70e0] 0xc001f16de0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:40:22.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:40:22.944: INFO: rc: 1
Oct 22 19:40:22.944: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0034b2150 exit status 1   true [0xc00072f4f0 0xc00072f760 0xc00072fb68] [0xc00072f4f0 0xc00072f760 0xc00072fb68] [0xc00072f718 0xc00072faa0] [0xba70e0 0xba70e0] 0xc001f179e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:40:32.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:40:36.363: INFO: rc: 1
Oct 22 19:40:36.363: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001f7f980 exit status 1   true [0xc0008e8d20 0xc0008e8de8 0xc0008e8e40] [0xc0008e8d20 0xc0008e8de8 0xc0008e8e40] [0xc0008e8dd0 0xc0008e8e30] [0xba70e0 0xba70e0] 0xc002ba0c60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:40:46.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:40:46.464: INFO: rc: 1
Oct 22 19:40:46.464: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001f7fa70 exit status 1   true [0xc0008e8e78 0xc0008e9108 0xc0008e9560] [0xc0008e8e78 0xc0008e9108 0xc0008e9560] [0xc0008e9000 0xc0008e9358] [0xba70e0 0xba70e0] 0xc002ba1380 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:40:56.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:40:56.569: INFO: rc: 1
Oct 22 19:40:56.569: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001f7fb30 exit status 1   true [0xc0008e9570 0xc0008e9640 0xc0008e9820] [0xc0008e9570 0xc0008e9640 0xc0008e9820] [0xc0008e95f0 0xc0008e9728] [0xba70e0 0xba70e0] 0xc002ba1c20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:41:06.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:41:06.669: INFO: rc: 1
Oct 22 19:41:06.670: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0034b22d0 exit status 1   true [0xc00072fc20 0xc00090e020 0xc00090e0d8] [0xc00072fc20 0xc00090e020 0xc00090e0d8] [0xc00072fee0 0xc00090e068] [0xba70e0 0xba70e0] 0xc0028c5200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:41:16.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:41:16.773: INFO: rc: 1
Oct 22 19:41:16.773: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc002eea150 exit status 1   true [0xc00315a000 0xc00315a030 0xc00315a048] [0xc00315a000 0xc00315a030 0xc00315a048] [0xc00315a028 0xc00315a040] [0xba70e0 0xba70e0] 0xc001c98900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:41:26.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:41:26.871: INFO: rc: 1
Oct 22 19:41:26.871: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00267e120 exit status 1   true [0xc002fd2000 0xc002fd2018 0xc002fd2030] [0xc002fd2000 0xc002fd2018 0xc002fd2030] [0xc002fd2010 0xc002fd2028] [0xba70e0 0xba70e0] 0xc0024d42a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:41:36.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:41:36.968: INFO: rc: 1
Oct 22 19:41:36.968: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc002eea240 exit status 1   true [0xc00315a050 0xc00315a088 0xc00315a0b0] [0xc00315a050 0xc00315a088 0xc00315a0b0] [0xc00315a080 0xc00315a0a8] [0xba70e0 0xba70e0] 0xc001c98f00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:41:46.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:41:47.072: INFO: rc: 1
Oct 22 19:41:47.072: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc002eea3c0 exit status 1   true [0xc00315a0b8 0xc00315a0d0 0xc00315a0e8] [0xc00315a0b8 0xc00315a0d0 0xc00315a0e8] [0xc00315a0c8 0xc00315a0e0] [0xba70e0 0xba70e0] 0xc001c99680 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:41:57.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:41:57.159: INFO: rc: 1
Oct 22 19:41:57.159: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc002eea4e0 exit status 1   true [0xc00315a0f0 0xc00315a108 0xc00315a120] [0xc00315a0f0 0xc00315a108 0xc00315a120] [0xc00315a100 0xc00315a118] [0xba70e0 0xba70e0] 0xc003744480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:42:07.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:42:07.256: INFO: rc: 1
Oct 22 19:42:07.256: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001f7f8f0 exit status 1   true [0xc00072efa8 0xc00072f2a0 0xc00072f4f0] [0xc00072efa8 0xc00072f2a0 0xc00072f4f0] [0xc00072f158 0xc00072f468] [0xba70e0 0xba70e0] 0xc001c98900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:42:17.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:42:17.360: INFO: rc: 1
Oct 22 19:42:17.360: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0034b20c0 exit status 1   true [0xc000010010 0xc0008e8460 0xc0008e8a68] [0xc000010010 0xc0008e8460 0xc0008e8a68] [0xc0008e81a0 0xc0008e8808] [0xba70e0 0xba70e0] 0xc001f16de0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:42:27.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:42:27.460: INFO: rc: 1
Oct 22 19:42:27.460: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001f7fa10 exit status 1   true [0xc00072f650 0xc00072f8b8 0xc00072fc20] [0xc00072f650 0xc00072f8b8 0xc00072fc20] [0xc00072f760 0xc00072fb68] [0xba70e0 0xba70e0] 0xc001c98f00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:42:37.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:42:37.557: INFO: rc: 1
Oct 22 19:42:37.558: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0034b2210 exit status 1   true [0xc0008e8c18 0xc0008e8dd0 0xc0008e8e30] [0xc0008e8c18 0xc0008e8dd0 0xc0008e8e30] [0xc0008e8db0 0xc0008e8e08] [0xba70e0 0xba70e0] 0xc001f179e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:42:47.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:42:47.650: INFO: rc: 1
Oct 22 19:42:47.650: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0034b2330 exit status 1   true [0xc0008e8e40 0xc0008e9000 0xc0008e9358] [0xc0008e8e40 0xc0008e9000 0xc0008e9358] [0xc0008e8eb8 0xc0008e9268] [0xba70e0 0xba70e0] 0xc0023f3020 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:42:57.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:42:57.750: INFO: rc: 1
Oct 22 19:42:57.750: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001f7fec0 exit status 1   true [0xc00072fd20 0xc00090e048 0xc00090e0f0] [0xc00072fd20 0xc00090e048 0xc00090e0f0] [0xc00090e020 0xc00090e0d8] [0xba70e0 0xba70e0] 0xc001c99680 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:43:07.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:43:07.843: INFO: rc: 1
Oct 22 19:43:07.843: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00267e0c0 exit status 1   true [0xc00315a000 0xc00315a030 0xc00315a048] [0xc00315a000 0xc00315a030 0xc00315a048] [0xc00315a028 0xc00315a040] [0xba70e0 0xba70e0] 0xc002ba0a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:43:17.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:43:17.946: INFO: rc: 1
Oct 22 19:43:17.946: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc002eea0f0 exit status 1   true [0xc002fd2000 0xc002fd2018 0xc002fd2030] [0xc002fd2000 0xc002fd2018 0xc002fd2030] [0xc002fd2010 0xc002fd2028] [0xba70e0 0xba70e0] 0xc0028c54a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:43:27.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:43:28.049: INFO: rc: 1
Oct 22 19:43:28.049: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc002eea210 exit status 1   true [0xc002fd2040 0xc002fd2088 0xc002fd20c8] [0xc002fd2040 0xc002fd2088 0xc002fd20c8] [0xc002fd2068 0xc002fd20c0] [0xba70e0 0xba70e0] 0xc003744240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:43:38.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:43:38.145: INFO: rc: 1
Oct 22 19:43:38.145: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0034b2450 exit status 1   true [0xc0008e9560 0xc0008e95f0 0xc0008e9728] [0xc0008e9560 0xc0008e95f0 0xc0008e9728] [0xc0008e95c0 0xc0008e96d8] [0xba70e0 0xba70e0] 0xc0024d4240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:43:48.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:43:48.249: INFO: rc: 1
Oct 22 19:43:48.249: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0034b2510 exit status 1   true [0xc0008e9820 0xc0008e9a08 0xc0008e9b80] [0xc0008e9820 0xc0008e9a08 0xc0008e9b80] [0xc0008e9918 0xc0008e9b30] [0xba70e0 0xba70e0] 0xc0024d45a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Oct 22 19:43:58.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5689 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:43:58.357: INFO: rc: 1
Oct 22 19:43:58.357: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: 
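The long sequence above is the framework's RunHostCmd helper retrying `kubectl exec` every 10 seconds; the command keeps failing, first because the nginx container has already been stopped and then because pod ss-1 itself no longer exists, which is expected while the set scales down. A rough sketch of that retry pattern, with kubectl driven from Go and the names and paths taken from the log:

// runhostcmd_retry.go - sketch of retrying a command inside a pod via
// kubectl exec, mirroring the 10s retry loop logged above.
package main

import (
    "fmt"
    "os/exec"
    "time"
)

func main() {
    const (
        kubeconfig = "/root/.kube/config"
        namespace  = "statefulset-5689"
        pod        = "ss-1"
        script     = "mv -v /tmp/index.html /usr/share/nginx/html/ || true"
    )

    deadline := time.Now().Add(5 * time.Minute)
    for {
        out, err := exec.Command("kubectl",
            "--kubeconfig="+kubeconfig,
            "exec", "--namespace="+namespace, pod,
            "--", "/bin/sh", "-x", "-c", script,
        ).CombinedOutput()
        if err == nil {
            fmt.Printf("succeeded:\n%s\n", out)
            return
        }
        fmt.Printf("kubectl exec failed (%v), output:\n%s\n", err, out)
        if time.Now().After(deadline) {
            fmt.Println("giving up after 5 minutes")
            return
        }
        time.Sleep(10 * time.Second) // same backoff the framework logs above
    }
}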
Oct 22 19:43:58.357: INFO: Scaling statefulset ss to 0
Oct 22 19:43:58.388: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Oct 22 19:43:58.391: INFO: Deleting all statefulset in ns statefulset-5689
Oct 22 19:43:58.393: INFO: Scaling statefulset ss to 0
Oct 22 19:43:58.401: INFO: Waiting for statefulset status.replicas updated to 0
Oct 22 19:43:58.403: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:43:58.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5689" for this suite.
Oct 22 19:44:04.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:44:04.507: INFO: namespace statefulset-5689 deletion completed in 6.086565597s

• [SLOW TEST:375.830 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
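For reference, the AfterEach cleanup logged above (scale ss to 0, wait for status.replicas to reach 0, delete the StatefulSet) can be reproduced with plain kubectl. The following is a hedged sketch using the namespace, object name and kubeconfig path from this run:

// cleanup.go - sketch of the AfterEach cleanup shown above: scale the
// StatefulSet to zero, wait for status.replicas to drain, then delete it.
package main

import (
    "fmt"
    "os/exec"
    "strings"
    "time"
)

// kubectl runs a kubectl subcommand against the test namespace and returns
// its combined output.
func kubectl(args ...string) (string, error) {
    base := []string{"--kubeconfig=/root/.kube/config", "--namespace=statefulset-5689"}
    out, err := exec.Command("kubectl", append(base, args...)...).CombinedOutput()
    return strings.TrimSpace(string(out)), err
}

func main() {
    if _, err := kubectl("scale", "statefulset", "ss", "--replicas=0"); err != nil {
        panic(err)
    }
    // Wait (up to ~5 minutes) for status.replicas to be reported as 0.
    for i := 0; i < 60; i++ {
        replicas, _ := kubectl("get", "statefulset", "ss", "-o", "jsonpath={.status.replicas}")
        fmt.Printf("status.replicas = %q\n", replicas)
        if replicas == "0" || replicas == "" {
            break
        }
        time.Sleep(5 * time.Second)
    }
    if _, err := kubectl("delete", "statefulset", "ss"); err != nil {
        panic(err)
    }
}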
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:44:04.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Oct 22 19:44:04.600: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e2743ff4-b4b2-4720-bef6-924df3d98d95" in namespace "projected-1283" to be "success or failure"
Oct 22 19:44:04.642: INFO: Pod "downwardapi-volume-e2743ff4-b4b2-4720-bef6-924df3d98d95": Phase="Pending", Reason="", readiness=false. Elapsed: 41.48492ms
Oct 22 19:44:06.646: INFO: Pod "downwardapi-volume-e2743ff4-b4b2-4720-bef6-924df3d98d95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0460224s
Oct 22 19:44:08.650: INFO: Pod "downwardapi-volume-e2743ff4-b4b2-4720-bef6-924df3d98d95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049998695s
STEP: Saw pod success
Oct 22 19:44:08.650: INFO: Pod "downwardapi-volume-e2743ff4-b4b2-4720-bef6-924df3d98d95" satisfied condition "success or failure"
Oct 22 19:44:08.653: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-e2743ff4-b4b2-4720-bef6-924df3d98d95 container client-container: 
STEP: delete the pod
Oct 22 19:44:08.689: INFO: Waiting for pod downwardapi-volume-e2743ff4-b4b2-4720-bef6-924df3d98d95 to disappear
Oct 22 19:44:08.712: INFO: Pod downwardapi-volume-e2743ff4-b4b2-4720-bef6-924df3d98d95 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:44:08.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1283" for this suite.
Oct 22 19:44:14.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:44:14.809: INFO: namespace projected-1283 deletion completed in 6.092568055s

• [SLOW TEST:10.301 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
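The test above builds a pod whose projected downwardAPI volume item carries an explicit file mode and then verifies that mode on the mounted file. A sketch of what such a pod spec looks like; the pod name, image, command and the 0400 mode are illustrative assumptions, not the exact values the conformance test uses.

// projected_mode.go - sketch of a pod with a projected downwardAPI volume
// item that sets an explicit file mode; prints the manifest as JSON so it
// could be piped to `kubectl create -f -`.
package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    mode := int32(0400) // permissions the test then checks on the mounted file

    pod := corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "ls -l /etc/podinfo"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "podinfo",
                    MountPath: "/etc/podinfo",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path: "podname",
                                    Mode: &mode,
                                    FieldRef: &corev1.ObjectFieldSelector{
                                        FieldPath: "metadata.name",
                                    },
                                }},
                            },
                        }},
                    },
                },
            }},
        },
    }

    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}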
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:44:14.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Oct 22 19:44:14.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6567'
Oct 22 19:44:15.140: INFO: stderr: ""
Oct 22 19:44:15.141: INFO: stdout: "replicationcontroller/redis-master created\n"
Oct 22 19:44:15.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6567'
Oct 22 19:44:15.460: INFO: stderr: ""
Oct 22 19:44:15.460: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Oct 22 19:44:16.497: INFO: Selector matched 1 pod for map[app:redis]
Oct 22 19:44:16.497: INFO: Found 0 / 1
Oct 22 19:44:17.464: INFO: Selector matched 1 pod for map[app:redis]
Oct 22 19:44:17.464: INFO: Found 0 / 1
Oct 22 19:44:18.464: INFO: Selector matched 1 pod for map[app:redis]
Oct 22 19:44:18.464: INFO: Found 0 / 1
Oct 22 19:44:19.463: INFO: Selector matched 1 pod for map[app:redis]
Oct 22 19:44:19.463: INFO: Found 1 / 1
Oct 22 19:44:19.463: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Oct 22 19:44:19.466: INFO: Selector matched 1 pod for map[app:redis]
Oct 22 19:44:19.466: INFO: ForEach: Found 1 pod from the filter.  Now looping through them.
Oct 22 19:44:19.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-h4ttr --namespace=kubectl-6567'
Oct 22 19:44:19.587: INFO: stderr: ""
Oct 22 19:44:19.587: INFO: stdout: "Name:           redis-master-h4ttr\nNamespace:      kubectl-6567\nPriority:       0\nNode:           iruya-worker2/172.18.0.5\nStart Time:     Thu, 22 Oct 2020 19:44:15 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.244.2.248\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   containerd://5cc30a926e84a75441d8165ac90913ac393f2ff4db4a19acb67a188284087b03\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Thu, 22 Oct 2020 19:44:17 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7ph72 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-7ph72:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-7ph72\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                    Message\n  ----    ------     ----  ----                    -------\n  Normal  Scheduled  4s    default-scheduler       Successfully assigned kubectl-6567/redis-master-h4ttr to iruya-worker2\n  Normal  Pulled     3s    kubelet, iruya-worker2  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, iruya-worker2  Created container redis-master\n  Normal  Started    2s    kubelet, iruya-worker2  Started container redis-master\n"
Oct 22 19:44:19.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-6567'
Oct 22 19:44:19.722: INFO: stderr: ""
Oct 22 19:44:19.722: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-6567\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  4s    replication-controller  Created pod: redis-master-h4ttr\n"
Oct 22 19:44:19.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-6567'
Oct 22 19:44:19.818: INFO: stderr: ""
Oct 22 19:44:19.818: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-6567\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.109.74.60\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.244.2.248:6379\nSession Affinity:  None\nEvents:            \n"
Oct 22 19:44:19.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
Oct 22 19:44:19.941: INFO: stderr: ""
Oct 22 19:44:19.941: INFO: stdout: "Name:               iruya-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Wed, 23 Sep 2020 08:25:31 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Thu, 22 Oct 2020 19:44:13 +0000   Wed, 23 Sep 2020 08:25:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Thu, 22 Oct 2020 19:44:13 +0000   Wed, 23 Sep 2020 08:25:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Thu, 22 Oct 2020 19:44:13 +0000   Wed, 23 Sep 2020 08:25:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Thu, 22 Oct 2020 19:44:13 +0000   Wed, 23 Sep 2020 08:26:01 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.7\n  Hostname:    iruya-control-plane\nCapacity:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759868Ki\n pods:               110\nAllocatable:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759868Ki\n pods:               110\nSystem Info:\n Machine ID:                 75bedc8ea3a84920a6257d408ae4fc72\n System UUID:                f7c1d795-23db-4f0f-aa92-a051f5bbc85d\n Boot ID:                    b267d78b-f69b-4338-80e8-3f4944338e5d\n Kernel Version:             4.15.0-118-generic\n OS Image:                   Ubuntu 19.10\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  containerd://1.3.3-14-g449e9269\n Kubelet Version:            v1.15.11\n Kube-Proxy Version:         v1.15.11\nPodCIDR:                     10.244.0.0/24\nNon-terminated Pods:         (9 in total)\n  Namespace                  Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                coredns-5d4dd4b4db-ktm6r                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     29d\n  kube-system                coredns-5d4dd4b4db-m9gbg                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     29d\n  kube-system                etcd-iruya-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         29d\n  kube-system                kindnet-rv6n5                                  100m (0%)     100m (0%)   50Mi (0%)        
50Mi (0%)      29d\n  kube-system                kube-apiserver-iruya-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         29d\n  kube-system                kube-controller-manager-iruya-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         29d\n  kube-system                kube-proxy-zcw5n                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         29d\n  kube-system                kube-scheduler-iruya-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         29d\n  local-path-storage         local-path-provisioner-668779bd7-t77bq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         29d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              \n"
Oct 22 19:44:19.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-6567'
Oct 22 19:44:20.045: INFO: stderr: ""
Oct 22 19:44:20.045: INFO: stdout: "Name:         kubectl-6567\nLabels:       e2e-framework=kubectl\n              e2e-run=2ca3cac9-56dc-4215-8ed6-81202124ad5e\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:44:20.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6567" for this suite.
Oct 22 19:44:42.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:44:42.183: INFO: namespace kubectl-6567 deletion completed in 22.134047845s

• [SLOW TEST:27.373 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
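The describe test above creates a ReplicationController and Service, waits for the Redis pod, then runs `kubectl describe` against the pod, rc, service, node and namespace and asserts that the output contains the expected fields. A small sketch of that kind of check driven from Go; the object names and namespace are the ones from this run, while the substrings checked for are only a guess at what the real test asserts.

// describe_check.go - sketch of running kubectl describe and checking for a
// few fields a user would expect to see in the output.
package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// describe runs `kubectl describe` in the test namespace and returns its output.
func describe(args ...string) string {
    base := []string{"--kubeconfig=/root/.kube/config", "--namespace=kubectl-6567", "describe"}
    out, err := exec.Command("kubectl", append(base, args...)...).CombinedOutput()
    if err != nil {
        panic(fmt.Sprintf("kubectl describe %v: %v\n%s", args, err, out))
    }
    return string(out)
}

func main() {
    // Pod name taken from the log above; the real test discovers it by
    // listing pods with the app=redis selector.
    outputs := []string{
        describe("pod", "redis-master-h4ttr"),
        describe("rc", "redis-master"),
        describe("service", "redis-master"),
    }

    for _, want := range []string{"Name:", "Namespace:", "Labels:"} {
        for _, out := range outputs {
            if !strings.Contains(out, want) {
                fmt.Printf("missing %q in describe output\n", want)
            }
        }
    }
    fmt.Println("kubectl describe output checked")
}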
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:44:42.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6811.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6811.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6811.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6811.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6811.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6811.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 22 19:44:50.386: INFO: DNS probes using dns-6811/dns-test-3e10f901-a820-4919-9ecb-3ee51736b357 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:44:50.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6811" for this suite.
Oct 22 19:44:56.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:44:56.545: INFO: namespace dns-6811 deletion completed in 6.103229859s

• [SLOW TEST:14.361 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
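The DNS test above runs the wheezy/jessie probe loops inside a utility pod; each iteration uses getent to confirm the /etc/hosts entries resolve and writes an OK marker that the test later reads back. A minimal sketch of performing one such check from outside the pod with `kubectl exec`; the pod and service names come from the log, and since the real probe pod has several containers a -c flag may be needed.

// hosts_probe.go - sketch of checking a hostname from inside a pod the way
// the probe script above does, driven externally via kubectl exec.
package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    const (
        namespace = "dns-6811"
        pod       = "dns-test-3e10f901-a820-4919-9ecb-3ee51736b357"
        hostname  = "dns-querier-1.dns-test-service.dns-6811.svc.cluster.local"
    )

    // `getent hosts` consults /etc/hosts (and then DNS), so a non-empty
    // answer means the expected entry is resolvable from inside the pod.
    out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
        "exec", "--namespace="+namespace, pod, "--",
        "getent", "hosts", hostname).CombinedOutput()
    if err != nil {
        fmt.Printf("probe failed: %v\n%s\n", err, out)
        return
    }
    if strings.TrimSpace(string(out)) == "" {
        fmt.Println("no /etc/hosts or DNS entry found for", hostname)
        return
    }
    fmt.Printf("resolved: %s", out)
}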
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:44:56.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-tz8l
STEP: Creating a pod to test atomic-volume-subpath
Oct 22 19:44:56.659: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-tz8l" in namespace "subpath-2049" to be "success or failure"
Oct 22 19:44:56.662: INFO: Pod "pod-subpath-test-secret-tz8l": Phase="Pending", Reason="", readiness=false. Elapsed: 3.599133ms
Oct 22 19:44:58.666: INFO: Pod "pod-subpath-test-secret-tz8l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007546058s
Oct 22 19:45:00.670: INFO: Pod "pod-subpath-test-secret-tz8l": Phase="Running", Reason="", readiness=true. Elapsed: 4.011622415s
Oct 22 19:45:02.674: INFO: Pod "pod-subpath-test-secret-tz8l": Phase="Running", Reason="", readiness=true. Elapsed: 6.015730403s
Oct 22 19:45:04.679: INFO: Pod "pod-subpath-test-secret-tz8l": Phase="Running", Reason="", readiness=true. Elapsed: 8.019990037s
Oct 22 19:45:06.682: INFO: Pod "pod-subpath-test-secret-tz8l": Phase="Running", Reason="", readiness=true. Elapsed: 10.023331301s
Oct 22 19:45:08.686: INFO: Pod "pod-subpath-test-secret-tz8l": Phase="Running", Reason="", readiness=true. Elapsed: 12.027271878s
Oct 22 19:45:10.690: INFO: Pod "pod-subpath-test-secret-tz8l": Phase="Running", Reason="", readiness=true. Elapsed: 14.031947755s
Oct 22 19:45:12.695: INFO: Pod "pod-subpath-test-secret-tz8l": Phase="Running", Reason="", readiness=true. Elapsed: 16.036638062s
Oct 22 19:45:14.700: INFO: Pod "pod-subpath-test-secret-tz8l": Phase="Running", Reason="", readiness=true. Elapsed: 18.040965129s
Oct 22 19:45:16.704: INFO: Pod "pod-subpath-test-secret-tz8l": Phase="Running", Reason="", readiness=true. Elapsed: 20.045558116s
Oct 22 19:45:18.708: INFO: Pod "pod-subpath-test-secret-tz8l": Phase="Running", Reason="", readiness=true. Elapsed: 22.049907564s
Oct 22 19:45:20.721: INFO: Pod "pod-subpath-test-secret-tz8l": Phase="Running", Reason="", readiness=true. Elapsed: 24.062326365s
Oct 22 19:45:22.725: INFO: Pod "pod-subpath-test-secret-tz8l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.066094723s
STEP: Saw pod success
Oct 22 19:45:22.725: INFO: Pod "pod-subpath-test-secret-tz8l" satisfied condition "success or failure"
Oct 22 19:45:22.727: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-tz8l container test-container-subpath-secret-tz8l: 
STEP: delete the pod
Oct 22 19:45:22.746: INFO: Waiting for pod pod-subpath-test-secret-tz8l to disappear
Oct 22 19:45:22.766: INFO: Pod pod-subpath-test-secret-tz8l no longer exists
STEP: Deleting pod pod-subpath-test-secret-tz8l
Oct 22 19:45:22.766: INFO: Deleting pod "pod-subpath-test-secret-tz8l" in namespace "subpath-2049"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:45:22.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2049" for this suite.
Oct 22 19:45:28.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:45:28.877: INFO: namespace subpath-2049 deletion completed in 6.08318029s

• [SLOW TEST:32.332 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
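The subpath test above mounts a single key of a secret volume via subPath and verifies the container can read it while the volume stays healthy over repeated checks. A sketch of the general pod shape; the secret name, key, image and command are assumptions for illustration only.

// secret_subpath.go - sketch of a pod mounting one secret key with subPath;
// prints the manifest as JSON so it could be piped to `kubectl create -f -`.
package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
        ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-secret"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "test-container-subpath-secret",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /test-volume/sub-file"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "test-volume",
                    MountPath: "/test-volume/sub-file",
                    // SubPath mounts a single key of the secret instead of
                    // the whole volume directory.
                    SubPath: "sub-file",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{
                        SecretName: "my-secret", // assumed secret containing a "sub-file" key
                    },
                },
            }},
        },
    }

    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}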
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:45:28.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Oct 22 19:45:28.947: INFO: Waiting up to 5m0s for pod "downward-api-c8498e46-91c1-463f-a38d-8377e9b7f869" in namespace "downward-api-4272" to be "success or failure"
Oct 22 19:45:28.965: INFO: Pod "downward-api-c8498e46-91c1-463f-a38d-8377e9b7f869": Phase="Pending", Reason="", readiness=false. Elapsed: 18.303716ms
Oct 22 19:45:30.970: INFO: Pod "downward-api-c8498e46-91c1-463f-a38d-8377e9b7f869": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023335555s
Oct 22 19:45:32.973: INFO: Pod "downward-api-c8498e46-91c1-463f-a38d-8377e9b7f869": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026840603s
STEP: Saw pod success
Oct 22 19:45:32.974: INFO: Pod "downward-api-c8498e46-91c1-463f-a38d-8377e9b7f869" satisfied condition "success or failure"
Oct 22 19:45:32.976: INFO: Trying to get logs from node iruya-worker2 pod downward-api-c8498e46-91c1-463f-a38d-8377e9b7f869 container dapi-container: 
STEP: delete the pod
Oct 22 19:45:33.009: INFO: Waiting for pod downward-api-c8498e46-91c1-463f-a38d-8377e9b7f869 to disappear
Oct 22 19:45:33.042: INFO: Pod downward-api-c8498e46-91c1-463f-a38d-8377e9b7f869 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:45:33.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4272" for this suite.
Oct 22 19:45:39.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:45:39.280: INFO: namespace downward-api-4272 deletion completed in 6.233087221s

• [SLOW TEST:10.402 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
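
The Downward API test above injects the pod's own name, namespace and IP into the container as environment variables and checks the container output. A minimal sketch of such a pod with client-go types; the names, image and command are placeholders.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIEnvPod exposes pod metadata to the container through
// fieldRef-based environment variables.
func downwardAPIEnvPod() *corev1.Pod {
	fieldEnv := func(name, fieldPath string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: fieldPath},
			},
		}
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					fieldEnv("POD_NAME", "metadata.name"),
					fieldEnv("POD_NAMESPACE", "metadata.namespace"),
					fieldEnv("POD_IP", "status.podIP"),
				},
			}},
		},
	}
}

func main() { fmt.Println(downwardAPIEnvPod().Name) }
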
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:45:39.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-l7js
STEP: Creating a pod to test atomic-volume-subpath
Oct 22 19:45:39.372: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-l7js" in namespace "subpath-2012" to be "success or failure"
Oct 22 19:45:39.376: INFO: Pod "pod-subpath-test-downwardapi-l7js": Phase="Pending", Reason="", readiness=false. Elapsed: 3.426604ms
Oct 22 19:45:41.380: INFO: Pod "pod-subpath-test-downwardapi-l7js": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007544136s
Oct 22 19:45:43.384: INFO: Pod "pod-subpath-test-downwardapi-l7js": Phase="Running", Reason="", readiness=true. Elapsed: 4.011738809s
Oct 22 19:45:45.388: INFO: Pod "pod-subpath-test-downwardapi-l7js": Phase="Running", Reason="", readiness=true. Elapsed: 6.015959665s
Oct 22 19:45:47.393: INFO: Pod "pod-subpath-test-downwardapi-l7js": Phase="Running", Reason="", readiness=true. Elapsed: 8.020415314s
Oct 22 19:45:49.397: INFO: Pod "pod-subpath-test-downwardapi-l7js": Phase="Running", Reason="", readiness=true. Elapsed: 10.025229066s
Oct 22 19:45:51.402: INFO: Pod "pod-subpath-test-downwardapi-l7js": Phase="Running", Reason="", readiness=true. Elapsed: 12.02955455s
Oct 22 19:45:53.406: INFO: Pod "pod-subpath-test-downwardapi-l7js": Phase="Running", Reason="", readiness=true. Elapsed: 14.033688761s
Oct 22 19:45:55.411: INFO: Pod "pod-subpath-test-downwardapi-l7js": Phase="Running", Reason="", readiness=true. Elapsed: 16.038782734s
Oct 22 19:45:57.415: INFO: Pod "pod-subpath-test-downwardapi-l7js": Phase="Running", Reason="", readiness=true. Elapsed: 18.043129032s
Oct 22 19:45:59.421: INFO: Pod "pod-subpath-test-downwardapi-l7js": Phase="Running", Reason="", readiness=true. Elapsed: 20.049226328s
Oct 22 19:46:01.426: INFO: Pod "pod-subpath-test-downwardapi-l7js": Phase="Running", Reason="", readiness=true. Elapsed: 22.053649615s
Oct 22 19:46:03.429: INFO: Pod "pod-subpath-test-downwardapi-l7js": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.056956744s
STEP: Saw pod success
Oct 22 19:46:03.429: INFO: Pod "pod-subpath-test-downwardapi-l7js" satisfied condition "success or failure"
Oct 22 19:46:03.431: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-l7js container test-container-subpath-downwardapi-l7js: 
STEP: delete the pod
Oct 22 19:46:03.520: INFO: Waiting for pod pod-subpath-test-downwardapi-l7js to disappear
Oct 22 19:46:03.587: INFO: Pod pod-subpath-test-downwardapi-l7js no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-l7js
Oct 22 19:46:03.587: INFO: Deleting pod "pod-subpath-test-downwardapi-l7js" in namespace "subpath-2012"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:46:03.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2012" for this suite.
Oct 22 19:46:09.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:46:09.729: INFO: namespace subpath-2012 deletion completed in 6.134903819s

• [SLOW TEST:30.449 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
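
This second subpath test repeats the same subPath check against a downwardAPI volume, whose files are projected from pod fields and written atomically. A sketch of the volume plus its subPath mount; the volume name, file path and mount point are placeholders.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// downwardAPISubpath returns a downwardAPI volume that projects the pod name
// into a file, and a mount exposing only that file through subPath — the
// combination exercised by the "subpaths with downward pod" test.
func downwardAPISubpath() (corev1.Volume, corev1.VolumeMount) {
	vol := corev1.Volume{
		Name: "downward-volume",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "podname",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				}},
			},
		},
	}
	mount := corev1.VolumeMount{
		Name:      "downward-volume",
		MountPath: "/test-volume/podname", // placeholder mount point
		SubPath:   "podname",
	}
	return vol, mount
}

func main() {
	v, m := downwardAPISubpath()
	fmt.Println(v.Name, m.SubPath)
}
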
S
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:46:09.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Oct 22 19:46:09.769: INFO: Creating ReplicaSet my-hostname-basic-929c5b95-4a9f-44bf-b658-2cdc55c2f284
Oct 22 19:46:09.787: INFO: Pod name my-hostname-basic-929c5b95-4a9f-44bf-b658-2cdc55c2f284: Found 0 pods out of 1
Oct 22 19:46:14.792: INFO: Pod name my-hostname-basic-929c5b95-4a9f-44bf-b658-2cdc55c2f284: Found 1 pods out of 1
Oct 22 19:46:14.792: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-929c5b95-4a9f-44bf-b658-2cdc55c2f284" is running
Oct 22 19:46:14.795: INFO: Pod "my-hostname-basic-929c5b95-4a9f-44bf-b658-2cdc55c2f284-z8st5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-22 19:46:09 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-22 19:46:12 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-22 19:46:12 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-22 19:46:09 +0000 UTC Reason: Message:}])
Oct 22 19:46:14.795: INFO: Trying to dial the pod
Oct 22 19:46:19.808: INFO: Controller my-hostname-basic-929c5b95-4a9f-44bf-b658-2cdc55c2f284: Got expected result from replica 1 [my-hostname-basic-929c5b95-4a9f-44bf-b658-2cdc55c2f284-z8st5]: "my-hostname-basic-929c5b95-4a9f-44bf-b658-2cdc55c2f284-z8st5", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:46:19.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5453" for this suite.
Oct 22 19:46:25.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:46:25.914: INFO: namespace replicaset-5453 deletion completed in 6.102628393s

• [SLOW TEST:16.185 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
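
The ReplicaSet test creates a single-replica controller, waits for its pod to run, then dials the replica and expects the pod's own hostname back (the "Got expected result from replica 1" line above). A sketch of the object it creates; the image, port and label key are assumptions about a serve-hostname style test image, not values taken from the log.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// basicImageReplicaSet builds a one-replica ReplicaSet whose pod serves its
// own hostname over HTTP so each replica can be dialed and verified.
func basicImageReplicaSet(name string) *appsv1.ReplicaSet {
	replicas := int32(1)
	labels := map[string]string{"name": name}
	return &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  name,
						Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1", // assumed test image
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},          // assumed port
					}},
				},
			},
		},
	}
}

func main() { fmt.Println(basicImageReplicaSet("my-hostname-basic").Name) }
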
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:46:25.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-2mrwn in namespace proxy-9984
I1022 19:46:26.062150       6 runners.go:180] Created replication controller with name: proxy-service-2mrwn, namespace: proxy-9984, replica count: 1
I1022 19:46:27.112591       6 runners.go:180] proxy-service-2mrwn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1022 19:46:28.112824       6 runners.go:180] proxy-service-2mrwn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1022 19:46:29.113154       6 runners.go:180] proxy-service-2mrwn Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1022 19:46:30.113447       6 runners.go:180] proxy-service-2mrwn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1022 19:46:31.113685       6 runners.go:180] proxy-service-2mrwn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1022 19:46:32.113930       6 runners.go:180] proxy-service-2mrwn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1022 19:46:33.114142       6 runners.go:180] proxy-service-2mrwn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1022 19:46:34.114343       6 runners.go:180] proxy-service-2mrwn Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1022 19:46:35.114519       6 runners.go:180] proxy-service-2mrwn Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 22 19:46:35.118: INFO: setup took 9.142940868s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Oct 22 19:46:35.126: INFO: (0) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname1/proxy/: foo (200; 7.296046ms)
Oct 22 19:46:35.126: INFO: (0) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 7.313137ms)
Oct 22 19:46:35.126: INFO: (0) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p/proxy/: test (200; 7.801785ms)
Oct 22 19:46:35.126: INFO: (0) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:1080/proxy/: ... (200; 7.840344ms)
Oct 22 19:46:35.126: INFO: (0) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname1/proxy/: foo (200; 7.933147ms)
Oct 22 19:46:35.126: INFO: (0) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 8.017802ms)
Oct 22 19:46:35.126: INFO: (0) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:1080/proxy/: test<... (200; 8.074823ms)
Oct 22 19:46:35.130: INFO: (0) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 11.021494ms)
Oct 22 19:46:35.130: INFO: (0) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 11.021697ms)
Oct 22 19:46:35.130: INFO: (0) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname2/proxy/: bar (200; 11.326448ms)
Oct 22 19:46:35.130: INFO: (0) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname2/proxy/: bar (200; 11.14845ms)
Oct 22 19:46:35.132: INFO: (0) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:443/proxy/: test (200; 4.630111ms)
Oct 22 19:46:35.138: INFO: (1) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:1080/proxy/: test<... (200; 4.888932ms)
Oct 22 19:46:35.138: INFO: (1) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname2/proxy/: bar (200; 5.01505ms)
Oct 22 19:46:35.138: INFO: (1) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 5.020022ms)
Oct 22 19:46:35.138: INFO: (1) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:1080/proxy/: ... (200; 5.153612ms)
Oct 22 19:46:35.149: INFO: (1) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname2/proxy/: bar (200; 15.269366ms)
Oct 22 19:46:35.149: INFO: (1) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname1/proxy/: foo (200; 15.316602ms)
Oct 22 19:46:35.149: INFO: (1) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname1/proxy/: foo (200; 15.38385ms)
Oct 22 19:46:35.149: INFO: (1) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:443/proxy/: ... (200; 3.270578ms)
Oct 22 19:46:35.153: INFO: (2) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname2/proxy/: bar (200; 3.7652ms)
Oct 22 19:46:35.153: INFO: (2) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p/proxy/: test (200; 4.400775ms)
Oct 22 19:46:35.153: INFO: (2) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:1080/proxy/: test<... (200; 4.440467ms)
Oct 22 19:46:35.153: INFO: (2) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 4.467403ms)
Oct 22 19:46:35.153: INFO: (2) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:460/proxy/: tls baz (200; 4.579104ms)
Oct 22 19:46:35.153: INFO: (2) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname2/proxy/: bar (200; 4.496102ms)
Oct 22 19:46:35.153: INFO: (2) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:462/proxy/: tls qux (200; 4.558515ms)
Oct 22 19:46:35.153: INFO: (2) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 4.572715ms)
Oct 22 19:46:35.153: INFO: (2) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:443/proxy/: ... (200; 3.609012ms)
Oct 22 19:46:35.158: INFO: (3) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 4.206427ms)
Oct 22 19:46:35.158: INFO: (3) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 4.545626ms)
Oct 22 19:46:35.158: INFO: (3) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname1/proxy/: foo (200; 4.641038ms)
Oct 22 19:46:35.159: INFO: (3) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:462/proxy/: tls qux (200; 4.640247ms)
Oct 22 19:46:35.159: INFO: (3) /api/v1/namespaces/proxy-9984/services/https:proxy-service-2mrwn:tlsportname1/proxy/: tls baz (200; 4.845211ms)
Oct 22 19:46:35.159: INFO: (3) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname2/proxy/: bar (200; 4.852409ms)
Oct 22 19:46:35.159: INFO: (3) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname1/proxy/: foo (200; 4.900678ms)
Oct 22 19:46:35.159: INFO: (3) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname2/proxy/: bar (200; 5.104278ms)
Oct 22 19:46:35.159: INFO: (3) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:1080/proxy/: test<... (200; 5.064451ms)
Oct 22 19:46:35.159: INFO: (3) /api/v1/namespaces/proxy-9984/services/https:proxy-service-2mrwn:tlsportname2/proxy/: tls qux (200; 5.114146ms)
Oct 22 19:46:35.159: INFO: (3) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:443/proxy/: test (200; 5.247786ms)
Oct 22 19:46:35.159: INFO: (3) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:460/proxy/: tls baz (200; 5.263136ms)
Oct 22 19:46:35.162: INFO: (4) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p/proxy/: test (200; 2.968686ms)
Oct 22 19:46:35.162: INFO: (4) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:1080/proxy/: ... (200; 3.186678ms)
Oct 22 19:46:35.162: INFO: (4) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:1080/proxy/: test<... (200; 3.144359ms)
Oct 22 19:46:35.163: INFO: (4) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname1/proxy/: foo (200; 4.005509ms)
Oct 22 19:46:35.163: INFO: (4) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:443/proxy/: test (200; 3.996654ms)
Oct 22 19:46:35.169: INFO: (5) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 4.621523ms)
Oct 22 19:46:35.169: INFO: (5) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 4.689479ms)
Oct 22 19:46:35.169: INFO: (5) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:1080/proxy/: test<... (200; 4.715856ms)
Oct 22 19:46:35.169: INFO: (5) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 4.785599ms)
Oct 22 19:46:35.169: INFO: (5) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:1080/proxy/: ... (200; 4.763706ms)
Oct 22 19:46:35.169: INFO: (5) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 4.819365ms)
Oct 22 19:46:35.170: INFO: (5) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname1/proxy/: foo (200; 5.706537ms)
Oct 22 19:46:35.170: INFO: (5) /api/v1/namespaces/proxy-9984/services/https:proxy-service-2mrwn:tlsportname2/proxy/: tls qux (200; 5.705525ms)
Oct 22 19:46:35.170: INFO: (5) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname2/proxy/: bar (200; 6.112458ms)
Oct 22 19:46:35.170: INFO: (5) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname2/proxy/: bar (200; 6.229492ms)
Oct 22 19:46:35.170: INFO: (5) /api/v1/namespaces/proxy-9984/services/https:proxy-service-2mrwn:tlsportname1/proxy/: tls baz (200; 6.300519ms)
Oct 22 19:46:35.170: INFO: (5) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname1/proxy/: foo (200; 6.288096ms)
Oct 22 19:46:35.174: INFO: (6) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:462/proxy/: tls qux (200; 3.76001ms)
Oct 22 19:46:35.174: INFO: (6) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 4.029442ms)
Oct 22 19:46:35.175: INFO: (6) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 4.336468ms)
Oct 22 19:46:35.175: INFO: (6) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 4.361345ms)
Oct 22 19:46:35.175: INFO: (6) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:1080/proxy/: ... (200; 4.40713ms)
Oct 22 19:46:35.175: INFO: (6) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 4.393924ms)
Oct 22 19:46:35.175: INFO: (6) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:443/proxy/: test<... (200; 4.392424ms)
Oct 22 19:46:35.176: INFO: (6) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p/proxy/: test (200; 5.119452ms)
Oct 22 19:46:35.176: INFO: (6) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:460/proxy/: tls baz (200; 5.079466ms)
Oct 22 19:46:35.178: INFO: (6) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname1/proxy/: foo (200; 7.078176ms)
Oct 22 19:46:35.178: INFO: (6) /api/v1/namespaces/proxy-9984/services/https:proxy-service-2mrwn:tlsportname2/proxy/: tls qux (200; 7.459781ms)
Oct 22 19:46:35.178: INFO: (6) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname1/proxy/: foo (200; 7.421472ms)
Oct 22 19:46:35.178: INFO: (6) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname2/proxy/: bar (200; 7.523964ms)
Oct 22 19:46:35.178: INFO: (6) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname2/proxy/: bar (200; 7.542344ms)
Oct 22 19:46:35.178: INFO: (6) /api/v1/namespaces/proxy-9984/services/https:proxy-service-2mrwn:tlsportname1/proxy/: tls baz (200; 7.512599ms)
Oct 22 19:46:35.181: INFO: (7) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:443/proxy/: test (200; 4.280552ms)
Oct 22 19:46:35.183: INFO: (7) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:462/proxy/: tls qux (200; 4.455343ms)
Oct 22 19:46:35.183: INFO: (7) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 4.739675ms)
Oct 22 19:46:35.183: INFO: (7) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname1/proxy/: foo (200; 4.79556ms)
Oct 22 19:46:35.183: INFO: (7) /api/v1/namespaces/proxy-9984/services/https:proxy-service-2mrwn:tlsportname1/proxy/: tls baz (200; 4.710324ms)
Oct 22 19:46:35.183: INFO: (7) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname2/proxy/: bar (200; 4.790524ms)
Oct 22 19:46:35.183: INFO: (7) /api/v1/namespaces/proxy-9984/services/https:proxy-service-2mrwn:tlsportname2/proxy/: tls qux (200; 4.907551ms)
Oct 22 19:46:35.183: INFO: (7) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:1080/proxy/: ... (200; 4.939247ms)
Oct 22 19:46:35.183: INFO: (7) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname2/proxy/: bar (200; 4.921517ms)
Oct 22 19:46:35.183: INFO: (7) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 5.00362ms)
Oct 22 19:46:35.183: INFO: (7) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:1080/proxy/: test<... (200; 5.266558ms)
Oct 22 19:46:35.183: INFO: (7) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname1/proxy/: foo (200; 5.243216ms)
Oct 22 19:46:35.186: INFO: (8) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:443/proxy/: test<... (200; 3.645209ms)
Oct 22 19:46:35.188: INFO: (8) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname1/proxy/: foo (200; 3.995494ms)
Oct 22 19:46:35.188: INFO: (8) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname2/proxy/: bar (200; 4.064715ms)
Oct 22 19:46:35.188: INFO: (8) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 3.963965ms)
Oct 22 19:46:35.188: INFO: (8) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 4.021759ms)
Oct 22 19:46:35.188: INFO: (8) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:462/proxy/: tls qux (200; 4.060939ms)
Oct 22 19:46:35.188: INFO: (8) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:1080/proxy/: ... (200; 4.357326ms)
Oct 22 19:46:35.188: INFO: (8) /api/v1/namespaces/proxy-9984/services/https:proxy-service-2mrwn:tlsportname2/proxy/: tls qux (200; 4.417716ms)
Oct 22 19:46:35.188: INFO: (8) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:460/proxy/: tls baz (200; 4.870435ms)
Oct 22 19:46:35.188: INFO: (8) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname2/proxy/: bar (200; 4.77017ms)
Oct 22 19:46:35.188: INFO: (8) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p/proxy/: test (200; 4.835206ms)
Oct 22 19:46:35.188: INFO: (8) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 4.790001ms)
Oct 22 19:46:35.188: INFO: (8) /api/v1/namespaces/proxy-9984/services/https:proxy-service-2mrwn:tlsportname1/proxy/: tls baz (200; 4.847898ms)
Oct 22 19:46:35.188: INFO: (8) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 4.897155ms)
Oct 22 19:46:35.188: INFO: (8) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname1/proxy/: foo (200; 4.844792ms)
Oct 22 19:46:35.191: INFO: (9) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 2.761721ms)
Oct 22 19:46:35.192: INFO: (9) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 3.226783ms)
Oct 22 19:46:35.193: INFO: (9) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:462/proxy/: tls qux (200; 4.267067ms)
Oct 22 19:46:35.193: INFO: (9) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname1/proxy/: foo (200; 4.236799ms)
Oct 22 19:46:35.193: INFO: (9) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 4.244327ms)
Oct 22 19:46:35.193: INFO: (9) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p/proxy/: test (200; 4.326733ms)
Oct 22 19:46:35.193: INFO: (9) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname2/proxy/: bar (200; 4.363747ms)
Oct 22 19:46:35.193: INFO: (9) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 4.278763ms)
Oct 22 19:46:35.193: INFO: (9) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:1080/proxy/: ... (200; 4.346779ms)
Oct 22 19:46:35.193: INFO: (9) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:1080/proxy/: test<... (200; 4.729674ms)
Oct 22 19:46:35.194: INFO: (9) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:460/proxy/: tls baz (200; 4.9249ms)
Oct 22 19:46:35.194: INFO: (9) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname1/proxy/: foo (200; 5.026381ms)
Oct 22 19:46:35.194: INFO: (9) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname2/proxy/: bar (200; 5.116955ms)
Oct 22 19:46:35.194: INFO: (9) /api/v1/namespaces/proxy-9984/services/https:proxy-service-2mrwn:tlsportname2/proxy/: tls qux (200; 5.054693ms)
Oct 22 19:46:35.194: INFO: (9) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:443/proxy/: ... (200; 3.690019ms)
Oct 22 19:46:35.198: INFO: (10) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 3.835577ms)
Oct 22 19:46:35.198: INFO: (10) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:1080/proxy/: test<... (200; 3.822357ms)
Oct 22 19:46:35.198: INFO: (10) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p/proxy/: test (200; 3.986301ms)
Oct 22 19:46:35.199: INFO: (10) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 5.206801ms)
Oct 22 19:46:35.199: INFO: (10) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:462/proxy/: tls qux (200; 5.29557ms)
Oct 22 19:46:35.200: INFO: (10) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname2/proxy/: bar (200; 5.739249ms)
Oct 22 19:46:35.200: INFO: (10) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname1/proxy/: foo (200; 5.642421ms)
Oct 22 19:46:35.200: INFO: (10) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname1/proxy/: foo (200; 5.729853ms)
Oct 22 19:46:35.200: INFO: (10) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:443/proxy/: ... (200; 3.119343ms)
Oct 22 19:46:35.203: INFO: (11) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p/proxy/: test (200; 3.398009ms)
Oct 22 19:46:35.203: INFO: (11) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 3.490337ms)
Oct 22 19:46:35.203: INFO: (11) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 3.395756ms)
Oct 22 19:46:35.203: INFO: (11) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:443/proxy/: test<... (200; 3.568242ms)
Oct 22 19:46:35.203: INFO: (11) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 3.488119ms)
Oct 22 19:46:35.203: INFO: (11) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:462/proxy/: tls qux (200; 3.555356ms)
Oct 22 19:46:35.203: INFO: (11) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 3.621063ms)
Oct 22 19:46:35.203: INFO: (11) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:460/proxy/: tls baz (200; 3.690926ms)
Oct 22 19:46:35.204: INFO: (11) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname2/proxy/: bar (200; 3.803481ms)
Oct 22 19:46:35.205: INFO: (11) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname1/proxy/: foo (200; 4.950829ms)
Oct 22 19:46:35.205: INFO: (11) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname1/proxy/: foo (200; 5.106315ms)
Oct 22 19:46:35.205: INFO: (11) /api/v1/namespaces/proxy-9984/services/https:proxy-service-2mrwn:tlsportname2/proxy/: tls qux (200; 5.071178ms)
Oct 22 19:46:35.205: INFO: (11) /api/v1/namespaces/proxy-9984/services/https:proxy-service-2mrwn:tlsportname1/proxy/: tls baz (200; 5.350497ms)
Oct 22 19:46:35.205: INFO: (11) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname2/proxy/: bar (200; 5.324079ms)
Oct 22 19:46:35.209: INFO: (12) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:443/proxy/: test<... (200; 3.448459ms)
Oct 22 19:46:35.209: INFO: (12) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 3.507931ms)
Oct 22 19:46:35.209: INFO: (12) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 3.477633ms)
Oct 22 19:46:35.209: INFO: (12) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:462/proxy/: tls qux (200; 3.581722ms)
Oct 22 19:46:35.209: INFO: (12) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 3.453536ms)
Oct 22 19:46:35.209: INFO: (12) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 3.531168ms)
Oct 22 19:46:35.209: INFO: (12) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p/proxy/: test (200; 3.50871ms)
Oct 22 19:46:35.209: INFO: (12) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:1080/proxy/: ... (200; 3.489354ms)
Oct 22 19:46:35.209: INFO: (12) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:460/proxy/: tls baz (200; 3.572936ms)
Oct 22 19:46:35.210: INFO: (12) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname2/proxy/: bar (200; 4.361613ms)
Oct 22 19:46:35.210: INFO: (12) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname1/proxy/: foo (200; 4.496859ms)
Oct 22 19:46:35.210: INFO: (12) /api/v1/namespaces/proxy-9984/services/https:proxy-service-2mrwn:tlsportname1/proxy/: tls baz (200; 4.560204ms)
Oct 22 19:46:35.210: INFO: (12) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname1/proxy/: foo (200; 4.596426ms)
Oct 22 19:46:35.210: INFO: (12) /api/v1/namespaces/proxy-9984/services/https:proxy-service-2mrwn:tlsportname2/proxy/: tls qux (200; 4.890053ms)
Oct 22 19:46:35.210: INFO: (12) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname2/proxy/: bar (200; 5.000915ms)
Oct 22 19:46:35.215: INFO: (13) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname2/proxy/: bar (200; 4.552659ms)
Oct 22 19:46:35.215: INFO: (13) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 4.53511ms)
Oct 22 19:46:35.215: INFO: (13) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:462/proxy/: tls qux (200; 4.647142ms)
Oct 22 19:46:35.216: INFO: (13) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname1/proxy/: foo (200; 5.515177ms)
Oct 22 19:46:35.216: INFO: (13) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 5.509223ms)
Oct 22 19:46:35.216: INFO: (13) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:1080/proxy/: ... (200; 5.778159ms)
Oct 22 19:46:35.216: INFO: (13) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 5.779527ms)
Oct 22 19:46:35.216: INFO: (13) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:1080/proxy/: test<... (200; 5.772411ms)
Oct 22 19:46:35.216: INFO: (13) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p/proxy/: test (200; 5.854919ms)
Oct 22 19:46:35.216: INFO: (13) /api/v1/namespaces/proxy-9984/services/https:proxy-service-2mrwn:tlsportname2/proxy/: tls qux (200; 5.805075ms)
Oct 22 19:46:35.216: INFO: (13) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname2/proxy/: bar (200; 5.805481ms)
Oct 22 19:46:35.216: INFO: (13) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:443/proxy/: ... (200; 3.046711ms)
Oct 22 19:46:35.220: INFO: (14) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 3.380434ms)
Oct 22 19:46:35.220: INFO: (14) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:1080/proxy/: test<... (200; 3.39383ms)
Oct 22 19:46:35.220: INFO: (14) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname1/proxy/: foo (200; 3.475137ms)
Oct 22 19:46:35.220: INFO: (14) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:462/proxy/: tls qux (200; 3.463166ms)
Oct 22 19:46:35.220: INFO: (14) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 3.642172ms)
Oct 22 19:46:35.220: INFO: (14) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p/proxy/: test (200; 3.624786ms)
Oct 22 19:46:35.220: INFO: (14) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:443/proxy/: ... (200; 3.894712ms)
Oct 22 19:46:35.225: INFO: (15) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 3.978118ms)
Oct 22 19:46:35.225: INFO: (15) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 3.931607ms)
Oct 22 19:46:35.226: INFO: (15) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p/proxy/: test (200; 4.100226ms)
Oct 22 19:46:35.226: INFO: (15) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 4.284587ms)
Oct 22 19:46:35.226: INFO: (15) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 4.338879ms)
Oct 22 19:46:35.226: INFO: (15) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:1080/proxy/: test<... (200; 4.444526ms)
Oct 22 19:46:35.226: INFO: (15) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:462/proxy/: tls qux (200; 4.471286ms)
Oct 22 19:46:35.226: INFO: (15) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname2/proxy/: bar (200; 4.729186ms)
Oct 22 19:46:35.226: INFO: (15) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname2/proxy/: bar (200; 4.863982ms)
Oct 22 19:46:35.226: INFO: (15) /api/v1/namespaces/proxy-9984/services/https:proxy-service-2mrwn:tlsportname1/proxy/: tls baz (200; 5.001621ms)
Oct 22 19:46:35.227: INFO: (15) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname1/proxy/: foo (200; 5.125392ms)
Oct 22 19:46:35.227: INFO: (15) /api/v1/namespaces/proxy-9984/services/https:proxy-service-2mrwn:tlsportname2/proxy/: tls qux (200; 5.118285ms)
Oct 22 19:46:35.227: INFO: (15) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname1/proxy/: foo (200; 5.231571ms)
Oct 22 19:46:35.230: INFO: (16) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:460/proxy/: tls baz (200; 3.120729ms)
Oct 22 19:46:35.231: INFO: (16) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p/proxy/: test (200; 4.404804ms)
Oct 22 19:46:35.231: INFO: (16) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname2/proxy/: bar (200; 4.406747ms)
Oct 22 19:46:35.231: INFO: (16) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname2/proxy/: bar (200; 4.506499ms)
Oct 22 19:46:35.232: INFO: (16) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:443/proxy/: test<... (200; 5.222929ms)
Oct 22 19:46:35.232: INFO: (16) /api/v1/namespaces/proxy-9984/services/https:proxy-service-2mrwn:tlsportname1/proxy/: tls baz (200; 5.221719ms)
Oct 22 19:46:35.232: INFO: (16) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname1/proxy/: foo (200; 5.248174ms)
Oct 22 19:46:35.232: INFO: (16) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 5.21781ms)
Oct 22 19:46:35.232: INFO: (16) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:1080/proxy/: ... (200; 5.254583ms)
Oct 22 19:46:35.236: INFO: (17) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname2/proxy/: bar (200; 4.450701ms)
Oct 22 19:46:35.237: INFO: (17) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:1080/proxy/: test<... (200; 4.584022ms)
Oct 22 19:46:35.237: INFO: (17) /api/v1/namespaces/proxy-9984/services/https:proxy-service-2mrwn:tlsportname1/proxy/: tls baz (200; 4.61247ms)
Oct 22 19:46:35.237: INFO: (17) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname1/proxy/: foo (200; 4.586608ms)
Oct 22 19:46:35.237: INFO: (17) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname2/proxy/: bar (200; 4.667982ms)
Oct 22 19:46:35.237: INFO: (17) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname1/proxy/: foo (200; 4.767796ms)
Oct 22 19:46:35.237: INFO: (17) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 5.184358ms)
Oct 22 19:46:35.237: INFO: (17) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 5.193139ms)
Oct 22 19:46:35.237: INFO: (17) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:1080/proxy/: ... (200; 5.215172ms)
Oct 22 19:46:35.237: INFO: (17) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 5.178691ms)
Oct 22 19:46:35.237: INFO: (17) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:443/proxy/: test (200; 5.228678ms)
Oct 22 19:46:35.237: INFO: (17) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:460/proxy/: tls baz (200; 5.19352ms)
Oct 22 19:46:35.237: INFO: (17) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:462/proxy/: tls qux (200; 5.223257ms)
Oct 22 19:46:35.237: INFO: (17) /api/v1/namespaces/proxy-9984/services/https:proxy-service-2mrwn:tlsportname2/proxy/: tls qux (200; 5.23722ms)
Oct 22 19:46:35.237: INFO: (17) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 5.278338ms)
Oct 22 19:46:35.240: INFO: (18) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:1080/proxy/: test<... (200; 2.750268ms)
Oct 22 19:46:35.240: INFO: (18) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:460/proxy/: tls baz (200; 2.911937ms)
Oct 22 19:46:35.241: INFO: (18) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:443/proxy/: test (200; 3.834504ms)
Oct 22 19:46:35.241: INFO: (18) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 3.939942ms)
Oct 22 19:46:35.241: INFO: (18) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 4.01432ms)
Oct 22 19:46:35.241: INFO: (18) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 3.998812ms)
Oct 22 19:46:35.241: INFO: (18) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:462/proxy/: tls qux (200; 4.011253ms)
Oct 22 19:46:35.242: INFO: (18) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 4.306912ms)
Oct 22 19:46:35.242: INFO: (18) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname1/proxy/: foo (200; 4.420622ms)
Oct 22 19:46:35.242: INFO: (18) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:1080/proxy/: ... (200; 4.655354ms)
Oct 22 19:46:35.242: INFO: (18) /api/v1/namespaces/proxy-9984/services/https:proxy-service-2mrwn:tlsportname1/proxy/: tls baz (200; 4.624598ms)
Oct 22 19:46:35.242: INFO: (18) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname2/proxy/: bar (200; 4.617467ms)
Oct 22 19:46:35.242: INFO: (18) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname1/proxy/: foo (200; 4.644362ms)
Oct 22 19:46:35.242: INFO: (18) /api/v1/namespaces/proxy-9984/services/https:proxy-service-2mrwn:tlsportname2/proxy/: tls qux (200; 4.62414ms)
Oct 22 19:46:35.246: INFO: (19) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:162/proxy/: bar (200; 3.883163ms)
Oct 22 19:46:35.246: INFO: (19) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:443/proxy/: test<... (200; 4.75342ms)
Oct 22 19:46:35.247: INFO: (19) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname1/proxy/: foo (200; 4.770404ms)
Oct 22 19:46:35.247: INFO: (19) /api/v1/namespaces/proxy-9984/services/proxy-service-2mrwn:portname2/proxy/: bar (200; 4.796469ms)
Oct 22 19:46:35.247: INFO: (19) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname2/proxy/: bar (200; 4.75213ms)
Oct 22 19:46:35.247: INFO: (19) /api/v1/namespaces/proxy-9984/services/https:proxy-service-2mrwn:tlsportname2/proxy/: tls qux (200; 4.771683ms)
Oct 22 19:46:35.247: INFO: (19) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 4.85196ms)
Oct 22 19:46:35.247: INFO: (19) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p/proxy/: test (200; 4.810022ms)
Oct 22 19:46:35.247: INFO: (19) /api/v1/namespaces/proxy-9984/pods/proxy-service-2mrwn-kn95p:160/proxy/: foo (200; 4.790652ms)
Oct 22 19:46:35.247: INFO: (19) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:460/proxy/: tls baz (200; 4.83268ms)
Oct 22 19:46:35.247: INFO: (19) /api/v1/namespaces/proxy-9984/services/https:proxy-service-2mrwn:tlsportname1/proxy/: tls baz (200; 4.914475ms)
Oct 22 19:46:35.247: INFO: (19) /api/v1/namespaces/proxy-9984/services/http:proxy-service-2mrwn:portname1/proxy/: foo (200; 4.893209ms)
Oct 22 19:46:35.247: INFO: (19) /api/v1/namespaces/proxy-9984/pods/http:proxy-service-2mrwn-kn95p:1080/proxy/: ... (200; 4.909779ms)
Oct 22 19:46:35.247: INFO: (19) /api/v1/namespaces/proxy-9984/pods/https:proxy-service-2mrwn-kn95p:462/proxy/: tls qux (200; 4.915105ms)
STEP: deleting ReplicationController proxy-service-2mrwn in namespace proxy-9984, will wait for the garbage collector to delete the pods
Oct 22 19:46:35.306: INFO: Deleting ReplicationController proxy-service-2mrwn took: 6.81712ms
Oct 22 19:46:35.406: INFO: Terminating ReplicationController proxy-service-2mrwn pods took: 100.265282ms
[AfterEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:46:45.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-9984" for this suite.
Oct 22 19:46:51.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:46:51.520: INFO: namespace proxy-9984 deletion completed in 6.105241716s

• [SLOW TEST:25.605 seconds]
[sig-network] Proxy
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
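
Every URL in the proxy test above goes through the apiserver proxy subresource, i.e. paths of the form /api/v1/namespaces/<ns>/{pods|services}/<scheme:name:port>/proxy/<path>. A minimal client-go sketch of issuing one such request; the kubeconfig location, namespace and service name are assumptions, and the context-free DoRaw call matches client-go releases of roughly this vintage (newer releases take a context).

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// proxyGet issues a GET through the apiserver proxy subresource. resource is
// "pods" or "services"; name may carry a scheme and port segment, e.g.
// "http:proxy-service-xyz:portname1", exactly as in the log above.
func proxyGet(cs *kubernetes.Clientset, ns, resource, name, path string) (string, error) {
	body, err := cs.CoreV1().RESTClient().
		Get().
		Namespace(ns).
		Resource(resource).
		Name(name).
		SubResource("proxy").
		Suffix(path).
		DoRaw()
	return string(body), err
}

func main() {
	// Assumes a reachable cluster and a kubeconfig in the default location.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	out, err := proxyGet(cs, "default", "services", "my-service:http", "/")
	fmt.Println(out, err)
}
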
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:46:51.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Oct 22 19:46:51.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8698'
Oct 22 19:46:51.827: INFO: stderr: ""
Oct 22 19:46:51.827: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Oct 22 19:46:51.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8698'
Oct 22 19:46:52.039: INFO: stderr: ""
Oct 22 19:46:52.039: INFO: stdout: "update-demo-nautilus-p2kdx update-demo-nautilus-v24kb "
Oct 22 19:46:52.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p2kdx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8698'
Oct 22 19:46:52.163: INFO: stderr: ""
Oct 22 19:46:52.163: INFO: stdout: ""
Oct 22 19:46:52.163: INFO: update-demo-nautilus-p2kdx is created but not running
Oct 22 19:46:57.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8698'
Oct 22 19:46:57.257: INFO: stderr: ""
Oct 22 19:46:57.257: INFO: stdout: "update-demo-nautilus-p2kdx update-demo-nautilus-v24kb "
Oct 22 19:46:57.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p2kdx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8698'
Oct 22 19:46:57.357: INFO: stderr: ""
Oct 22 19:46:57.357: INFO: stdout: "true"
Oct 22 19:46:57.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p2kdx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8698'
Oct 22 19:46:57.448: INFO: stderr: ""
Oct 22 19:46:57.448: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Oct 22 19:46:57.448: INFO: validating pod update-demo-nautilus-p2kdx
Oct 22 19:46:57.452: INFO: got data: {
  "image": "nautilus.jpg"
}

Oct 22 19:46:57.452: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct 22 19:46:57.452: INFO: update-demo-nautilus-p2kdx is verified up and running
Oct 22 19:46:57.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v24kb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8698'
Oct 22 19:46:57.546: INFO: stderr: ""
Oct 22 19:46:57.546: INFO: stdout: "true"
Oct 22 19:46:57.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v24kb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8698'
Oct 22 19:46:57.647: INFO: stderr: ""
Oct 22 19:46:57.647: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Oct 22 19:46:57.647: INFO: validating pod update-demo-nautilus-v24kb
Oct 22 19:46:57.651: INFO: got data: {
  "image": "nautilus.jpg"
}

Oct 22 19:46:57.651: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct 22 19:46:57.651: INFO: update-demo-nautilus-v24kb is verified up and running
STEP: scaling down the replication controller
Oct 22 19:46:57.654: INFO: scanned /root for discovery docs: 
Oct 22 19:46:57.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8698'
Oct 22 19:46:58.796: INFO: stderr: ""
Oct 22 19:46:58.796: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Oct 22 19:46:58.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8698'
Oct 22 19:46:58.893: INFO: stderr: ""
Oct 22 19:46:58.893: INFO: stdout: "update-demo-nautilus-p2kdx update-demo-nautilus-v24kb "
STEP: Replicas for name=update-demo: expected=1 actual=2
Oct 22 19:47:03.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8698'
Oct 22 19:47:03.990: INFO: stderr: ""
Oct 22 19:47:03.990: INFO: stdout: "update-demo-nautilus-p2kdx update-demo-nautilus-v24kb "
STEP: Replicas for name=update-demo: expected=1 actual=2
Oct 22 19:47:08.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8698'
Oct 22 19:47:09.088: INFO: stderr: ""
Oct 22 19:47:09.088: INFO: stdout: "update-demo-nautilus-p2kdx "
Oct 22 19:47:09.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p2kdx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8698'
Oct 22 19:47:09.187: INFO: stderr: ""
Oct 22 19:47:09.187: INFO: stdout: "true"
Oct 22 19:47:09.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p2kdx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8698'
Oct 22 19:47:09.286: INFO: stderr: ""
Oct 22 19:47:09.286: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Oct 22 19:47:09.286: INFO: validating pod update-demo-nautilus-p2kdx
Oct 22 19:47:09.290: INFO: got data: {
  "image": "nautilus.jpg"
}

Oct 22 19:47:09.290: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct 22 19:47:09.290: INFO: update-demo-nautilus-p2kdx is verified up and running
STEP: scaling up the replication controller
Oct 22 19:47:09.292: INFO: scanned /root for discovery docs: 
Oct 22 19:47:09.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8698'
Oct 22 19:47:10.517: INFO: stderr: ""
Oct 22 19:47:10.517: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Oct 22 19:47:10.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8698'
Oct 22 19:47:10.616: INFO: stderr: ""
Oct 22 19:47:10.617: INFO: stdout: "update-demo-nautilus-p2kdx update-demo-nautilus-s5fs8 "
Oct 22 19:47:10.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p2kdx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8698'
Oct 22 19:47:10.709: INFO: stderr: ""
Oct 22 19:47:10.709: INFO: stdout: "true"
Oct 22 19:47:10.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p2kdx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8698'
Oct 22 19:47:10.806: INFO: stderr: ""
Oct 22 19:47:10.806: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Oct 22 19:47:10.806: INFO: validating pod update-demo-nautilus-p2kdx
Oct 22 19:47:10.809: INFO: got data: {
  "image": "nautilus.jpg"
}

Oct 22 19:47:10.809: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Oct 22 19:47:10.809: INFO: update-demo-nautilus-p2kdx is verified up and running
Oct 22 19:47:10.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s5fs8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8698'
Oct 22 19:47:10.913: INFO: stderr: ""
Oct 22 19:47:10.913: INFO: stdout: ""
Oct 22 19:47:10.913: INFO: update-demo-nautilus-s5fs8 is created but not running
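Instead of the harness's five-second polling loop, a hand-run wait for the new replica could use kubectl wait (a sketch, pod name taken from this run):

  # Blocks until the pod reports the Ready condition, or gives up after the timeout
  kubectl --kubeconfig=/root/.kube/config -n kubectl-8698 wait --for=condition=Ready \
    pod/update-demo-nautilus-s5fs8 --timeout=2m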
Oct 22 19:47:15.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8698'
Oct 22 19:47:16.009: INFO: stderr: ""
Oct 22 19:47:16.009: INFO: stdout: "update-demo-nautilus-p2kdx update-demo-nautilus-s5fs8 "
Oct 22 19:47:16.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p2kdx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8698'
Oct 22 19:47:16.093: INFO: stderr: ""
Oct 22 19:47:16.093: INFO: stdout: "true"
Oct 22 19:47:16.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p2kdx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8698'
Oct 22 19:47:16.184: INFO: stderr: ""
Oct 22 19:47:16.184: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Oct 22 19:47:16.184: INFO: validating pod update-demo-nautilus-p2kdx
Oct 22 19:47:16.187: INFO: got data: {
  "image": "nautilus.jpg"
}

Oct 22 19:47:16.187: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Oct 22 19:47:16.187: INFO: update-demo-nautilus-p2kdx is verified up and running
Oct 22 19:47:16.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s5fs8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8698'
Oct 22 19:47:16.285: INFO: stderr: ""
Oct 22 19:47:16.285: INFO: stdout: "true"
Oct 22 19:47:16.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s5fs8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8698'
Oct 22 19:47:16.372: INFO: stderr: ""
Oct 22 19:47:16.373: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Oct 22 19:47:16.373: INFO: validating pod update-demo-nautilus-s5fs8
Oct 22 19:47:16.376: INFO: got data: {
  "image": "nautilus.jpg"
}

Oct 22 19:47:16.377: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Oct 22 19:47:16.377: INFO: update-demo-nautilus-s5fs8 is verified up and running
STEP: using delete to clean up resources
Oct 22 19:47:16.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8698'
Oct 22 19:47:16.471: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 22 19:47:16.471: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Oct 22 19:47:16.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8698'
Oct 22 19:47:16.605: INFO: stderr: "No resources found.\n"
Oct 22 19:47:16.605: INFO: stdout: ""
Oct 22 19:47:16.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8698 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Oct 22 19:47:16.774: INFO: stderr: ""
Oct 22 19:47:16.774: INFO: stdout: ""
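The go-template in the previous command lists only pods that do not yet carry a deletionTimestamp, so the empty output means every pod left behind by the force-deleted replication controller has at least begun deletion. A coarser hand-run check (a sketch) would be:

  # Should report "No resources found" once cleanup has finished
  kubectl -n kubectl-8698 get all -l name=update-demo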
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:47:16.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8698" for this suite.
Oct 22 19:47:39.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:47:39.182: INFO: namespace kubectl-8698 deletion completed in 22.401967217s

• [SLOW TEST:47.661 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:47:39.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Oct 22 19:47:39.267: INFO: Create a RollingUpdate DaemonSet
Oct 22 19:47:39.270: INFO: Check that daemon pods launch on every node of the cluster
Oct 22 19:47:39.281: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 19:47:39.286: INFO: Number of nodes with available pods: 0
Oct 22 19:47:39.286: INFO: Node iruya-worker is not yet running exactly one daemon pod
Oct 22 19:47:40.312: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 19:47:40.315: INFO: Number of nodes with available pods: 0
Oct 22 19:47:40.315: INFO: Node iruya-worker is not yet running exactly one daemon pod
Oct 22 19:47:41.291: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 19:47:41.294: INFO: Number of nodes with available pods: 0
Oct 22 19:47:41.295: INFO: Node iruya-worker is not yet running exactly one daemon pod
Oct 22 19:47:42.335: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 19:47:42.338: INFO: Number of nodes with available pods: 0
Oct 22 19:47:42.338: INFO: Node iruya-worker is not yet running exactly one daemon pod
Oct 22 19:47:43.291: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 19:47:43.295: INFO: Number of nodes with available pods: 0
Oct 22 19:47:43.295: INFO: Node iruya-worker is not yet running exactly one daemon pod
Oct 22 19:47:44.299: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 19:47:44.302: INFO: Number of nodes with available pods: 2
Oct 22 19:47:44.302: INFO: Number of running nodes: 2, number of available pods: 2
Oct 22 19:47:44.302: INFO: Update the DaemonSet to trigger a rollout
Oct 22 19:47:44.309: INFO: Updating DaemonSet daemon-set
Oct 22 19:47:56.439: INFO: Roll back the DaemonSet before rollout is complete
Oct 22 19:47:56.445: INFO: Updating DaemonSet daemon-set
Oct 22 19:47:56.446: INFO: Make sure DaemonSet rollback is complete
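The image update and the rollback here are driven through the API (no kubectl command is logged for either step). A roughly equivalent hand-run sequence, as a sketch only, would look like the following; the container name is a placeholder, since it is not shown in this log:

  kubectl -n daemonsets-1178 set image daemonset/daemon-set <container>=foo:non-existent   # trigger the bad rollout
  kubectl -n daemonsets-1178 rollout history daemonset/daemon-set                          # list revisions
  kubectl -n daemonsets-1178 rollout undo daemonset/daemon-set                             # return to the previous revision
  kubectl -n daemonsets-1178 rollout status daemonset/daemon-set                           # wait for the rollback to settle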
Oct 22 19:47:56.449: INFO: Wrong image for pod: daemon-set-szscn. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Oct 22 19:47:56.449: INFO: Pod daemon-set-szscn is not available
Oct 22 19:47:56.456: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 19:47:57.460: INFO: Wrong image for pod: daemon-set-szscn. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Oct 22 19:47:57.460: INFO: Pod daemon-set-szscn is not available
Oct 22 19:47:57.465: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 19:47:58.474: INFO: Pod daemon-set-zskcr is not available
Oct 22 19:47:58.535: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 19:47:59.466: INFO: Pod daemon-set-zskcr is not available
Oct 22 19:47:59.470: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1178, will wait for the garbage collector to delete the pods
Oct 22 19:47:59.534: INFO: Deleting DaemonSet.extensions daemon-set took: 8.5352ms
Oct 22 19:47:59.834: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.242626ms
Oct 22 19:48:05.437: INFO: Number of nodes with available pods: 0
Oct 22 19:48:05.437: INFO: Number of running nodes: 0, number of available pods: 0
Oct 22 19:48:05.439: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1178/daemonsets","resourceVersion":"5316947"},"items":null}

Oct 22 19:48:05.442: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1178/pods","resourceVersion":"5316947"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:48:05.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1178" for this suite.
Oct 22 19:48:11.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:48:11.553: INFO: namespace daemonsets-1178 deletion completed in 6.098154121s

• [SLOW TEST:32.371 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:48:11.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9491
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-9491
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-9491
Oct 22 19:48:11.627: INFO: Found 0 stateful pods, waiting for 1
Oct 22 19:48:21.633: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Oct 22 19:48:21.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9491 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Oct 22 19:48:21.896: INFO: stderr: "I1022 19:48:21.770445    2736 log.go:172] (0xc000914370) (0xc0003ce820) Create stream\nI1022 19:48:21.770512    2736 log.go:172] (0xc000914370) (0xc0003ce820) Stream added, broadcasting: 1\nI1022 19:48:21.773639    2736 log.go:172] (0xc000914370) Reply frame received for 1\nI1022 19:48:21.773668    2736 log.go:172] (0xc000914370) (0xc0003ce8c0) Create stream\nI1022 19:48:21.773677    2736 log.go:172] (0xc000914370) (0xc0003ce8c0) Stream added, broadcasting: 3\nI1022 19:48:21.774810    2736 log.go:172] (0xc000914370) Reply frame received for 3\nI1022 19:48:21.774885    2736 log.go:172] (0xc000914370) (0xc00068c460) Create stream\nI1022 19:48:21.774928    2736 log.go:172] (0xc000914370) (0xc00068c460) Stream added, broadcasting: 5\nI1022 19:48:21.776163    2736 log.go:172] (0xc000914370) Reply frame received for 5\nI1022 19:48:21.856128    2736 log.go:172] (0xc000914370) Data frame received for 5\nI1022 19:48:21.856161    2736 log.go:172] (0xc00068c460) (5) Data frame handling\nI1022 19:48:21.856182    2736 log.go:172] (0xc00068c460) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1022 19:48:21.884346    2736 log.go:172] (0xc000914370) Data frame received for 3\nI1022 19:48:21.884391    2736 log.go:172] (0xc0003ce8c0) (3) Data frame handling\nI1022 19:48:21.884407    2736 log.go:172] (0xc0003ce8c0) (3) Data frame sent\nI1022 19:48:21.884418    2736 log.go:172] (0xc000914370) Data frame received for 3\nI1022 19:48:21.884428    2736 log.go:172] (0xc0003ce8c0) (3) Data frame handling\nI1022 19:48:21.884563    2736 log.go:172] (0xc000914370) Data frame received for 5\nI1022 19:48:21.884590    2736 log.go:172] (0xc00068c460) (5) Data frame handling\nI1022 19:48:21.886214    2736 log.go:172] (0xc000914370) Data frame received for 1\nI1022 19:48:21.886247    2736 log.go:172] (0xc0003ce820) (1) Data frame handling\nI1022 19:48:21.886274    2736 log.go:172] (0xc0003ce820) (1) Data frame sent\nI1022 19:48:21.886319    2736 log.go:172] (0xc000914370) (0xc0003ce820) Stream removed, broadcasting: 1\nI1022 19:48:21.886354    2736 log.go:172] (0xc000914370) Go away received\nI1022 19:48:21.887776    2736 log.go:172] (0xc000914370) (0xc0003ce820) Stream removed, broadcasting: 1\nI1022 19:48:21.887825    2736 log.go:172] (0xc000914370) (0xc0003ce8c0) Stream removed, broadcasting: 3\nI1022 19:48:21.887847    2736 log.go:172] (0xc000914370) (0xc00068c460) Stream removed, broadcasting: 5\n"
Oct 22 19:48:21.896: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Oct 22 19:48:21.896: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Oct 22 19:48:21.899: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Oct 22 19:48:31.942: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
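Moving index.html out of the nginx web root is how the test makes ss-0 unhealthy: the stateful pods evidently gate readiness on that file, so the pod stays Running but its Ready condition flips to False. A quick hand-run check of that condition (a sketch, not part of the test) is:

  # Prints True or False for the pod's Ready condition
  kubectl -n statefulset-9491 get pod ss-0 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'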
Oct 22 19:48:31.942: INFO: Waiting for statefulset status.replicas updated to 0
Oct 22 19:48:31.984: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999643s
Oct 22 19:48:32.989: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.968518611s
Oct 22 19:48:33.994: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.963810676s
Oct 22 19:48:34.998: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.958634788s
Oct 22 19:48:36.002: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.954453441s
Oct 22 19:48:37.007: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.950245984s
Oct 22 19:48:38.025: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.945599689s
Oct 22 19:48:39.029: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.927483198s
Oct 22 19:48:40.034: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.923052364s
Oct 22 19:48:41.039: INFO: Verifying statefulset ss doesn't scale past 1 for another 918.783637ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-9491
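No kubectl command is logged for the scale itself, so it is presumably issued through the API; a roughly equivalent manual command would be:

  kubectl -n statefulset-9491 scale statefulset ss --replicas=3

With OrderedReady pod management, ss-1 and ss-2 are only created once ss-0 is Ready again, which is why the next exec first restores ss-0's index.html.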
Oct 22 19:48:42.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9491 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:48:42.259: INFO: stderr: "I1022 19:48:42.168459    2756 log.go:172] (0xc00093e000) (0xc0006aa1e0) Create stream\nI1022 19:48:42.168507    2756 log.go:172] (0xc00093e000) (0xc0006aa1e0) Stream added, broadcasting: 1\nI1022 19:48:42.170213    2756 log.go:172] (0xc00093e000) Reply frame received for 1\nI1022 19:48:42.170252    2756 log.go:172] (0xc00093e000) (0xc0009420a0) Create stream\nI1022 19:48:42.170264    2756 log.go:172] (0xc00093e000) (0xc0009420a0) Stream added, broadcasting: 3\nI1022 19:48:42.170976    2756 log.go:172] (0xc00093e000) Reply frame received for 3\nI1022 19:48:42.171010    2756 log.go:172] (0xc00093e000) (0xc000278000) Create stream\nI1022 19:48:42.171019    2756 log.go:172] (0xc00093e000) (0xc000278000) Stream added, broadcasting: 5\nI1022 19:48:42.171665    2756 log.go:172] (0xc00093e000) Reply frame received for 5\nI1022 19:48:42.251995    2756 log.go:172] (0xc00093e000) Data frame received for 3\nI1022 19:48:42.252028    2756 log.go:172] (0xc0009420a0) (3) Data frame handling\nI1022 19:48:42.252038    2756 log.go:172] (0xc0009420a0) (3) Data frame sent\nI1022 19:48:42.252053    2756 log.go:172] (0xc00093e000) Data frame received for 3\nI1022 19:48:42.252067    2756 log.go:172] (0xc0009420a0) (3) Data frame handling\nI1022 19:48:42.252152    2756 log.go:172] (0xc00093e000) Data frame received for 5\nI1022 19:48:42.252204    2756 log.go:172] (0xc000278000) (5) Data frame handling\nI1022 19:48:42.252227    2756 log.go:172] (0xc000278000) (5) Data frame sent\nI1022 19:48:42.252245    2756 log.go:172] (0xc00093e000) Data frame received for 5\nI1022 19:48:42.252253    2756 log.go:172] (0xc000278000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI1022 19:48:42.253534    2756 log.go:172] (0xc00093e000) Data frame received for 1\nI1022 19:48:42.253553    2756 log.go:172] (0xc0006aa1e0) (1) Data frame handling\nI1022 19:48:42.253574    2756 log.go:172] (0xc0006aa1e0) (1) Data frame sent\nI1022 19:48:42.253594    2756 log.go:172] (0xc00093e000) (0xc0006aa1e0) Stream removed, broadcasting: 1\nI1022 19:48:42.253643    2756 log.go:172] (0xc00093e000) Go away received\nI1022 19:48:42.253865    2756 log.go:172] (0xc00093e000) (0xc0006aa1e0) Stream removed, broadcasting: 1\nI1022 19:48:42.253877    2756 log.go:172] (0xc00093e000) (0xc0009420a0) Stream removed, broadcasting: 3\nI1022 19:48:42.253881    2756 log.go:172] (0xc00093e000) (0xc000278000) Stream removed, broadcasting: 5\n"
Oct 22 19:48:42.260: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Oct 22 19:48:42.260: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Oct 22 19:48:42.264: INFO: Found 1 stateful pods, waiting for 3
Oct 22 19:48:52.269: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 22 19:48:52.269: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 22 19:48:52.269: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Oct 22 19:48:52.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9491 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Oct 22 19:48:52.525: INFO: stderr: "I1022 19:48:52.408746    2776 log.go:172] (0xc000a52370) (0xc0003ea6e0) Create stream\nI1022 19:48:52.408792    2776 log.go:172] (0xc000a52370) (0xc0003ea6e0) Stream added, broadcasting: 1\nI1022 19:48:52.413136    2776 log.go:172] (0xc000a52370) Reply frame received for 1\nI1022 19:48:52.413176    2776 log.go:172] (0xc000a52370) (0xc0004280a0) Create stream\nI1022 19:48:52.413191    2776 log.go:172] (0xc000a52370) (0xc0004280a0) Stream added, broadcasting: 3\nI1022 19:48:52.414235    2776 log.go:172] (0xc000a52370) Reply frame received for 3\nI1022 19:48:52.414262    2776 log.go:172] (0xc000a52370) (0xc0003ea000) Create stream\nI1022 19:48:52.414270    2776 log.go:172] (0xc000a52370) (0xc0003ea000) Stream added, broadcasting: 5\nI1022 19:48:52.415323    2776 log.go:172] (0xc000a52370) Reply frame received for 5\nI1022 19:48:52.517512    2776 log.go:172] (0xc000a52370) Data frame received for 5\nI1022 19:48:52.517538    2776 log.go:172] (0xc0003ea000) (5) Data frame handling\nI1022 19:48:52.517548    2776 log.go:172] (0xc0003ea000) (5) Data frame sent\nI1022 19:48:52.517556    2776 log.go:172] (0xc000a52370) Data frame received for 5\nI1022 19:48:52.517562    2776 log.go:172] (0xc0003ea000) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1022 19:48:52.517583    2776 log.go:172] (0xc000a52370) Data frame received for 3\nI1022 19:48:52.517607    2776 log.go:172] (0xc0004280a0) (3) Data frame handling\nI1022 19:48:52.517621    2776 log.go:172] (0xc0004280a0) (3) Data frame sent\nI1022 19:48:52.517627    2776 log.go:172] (0xc000a52370) Data frame received for 3\nI1022 19:48:52.517631    2776 log.go:172] (0xc0004280a0) (3) Data frame handling\nI1022 19:48:52.519173    2776 log.go:172] (0xc000a52370) Data frame received for 1\nI1022 19:48:52.519193    2776 log.go:172] (0xc0003ea6e0) (1) Data frame handling\nI1022 19:48:52.519203    2776 log.go:172] (0xc0003ea6e0) (1) Data frame sent\nI1022 19:48:52.519219    2776 log.go:172] (0xc000a52370) (0xc0003ea6e0) Stream removed, broadcasting: 1\nI1022 19:48:52.519233    2776 log.go:172] (0xc000a52370) Go away received\nI1022 19:48:52.519639    2776 log.go:172] (0xc000a52370) (0xc0003ea6e0) Stream removed, broadcasting: 1\nI1022 19:48:52.519662    2776 log.go:172] (0xc000a52370) (0xc0004280a0) Stream removed, broadcasting: 3\nI1022 19:48:52.519676    2776 log.go:172] (0xc000a52370) (0xc0003ea000) Stream removed, broadcasting: 5\n"
Oct 22 19:48:52.525: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Oct 22 19:48:52.525: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Oct 22 19:48:52.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9491 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Oct 22 19:48:52.766: INFO: stderr: "I1022 19:48:52.643682    2797 log.go:172] (0xc000a4e160) (0xc0007fc140) Create stream\nI1022 19:48:52.643726    2797 log.go:172] (0xc000a4e160) (0xc0007fc140) Stream added, broadcasting: 1\nI1022 19:48:52.645906    2797 log.go:172] (0xc000a4e160) Reply frame received for 1\nI1022 19:48:52.645949    2797 log.go:172] (0xc000a4e160) (0xc0007fc280) Create stream\nI1022 19:48:52.645983    2797 log.go:172] (0xc000a4e160) (0xc0007fc280) Stream added, broadcasting: 3\nI1022 19:48:52.647081    2797 log.go:172] (0xc000a4e160) Reply frame received for 3\nI1022 19:48:52.647127    2797 log.go:172] (0xc000a4e160) (0xc0006661e0) Create stream\nI1022 19:48:52.647138    2797 log.go:172] (0xc000a4e160) (0xc0006661e0) Stream added, broadcasting: 5\nI1022 19:48:52.648001    2797 log.go:172] (0xc000a4e160) Reply frame received for 5\nI1022 19:48:52.709306    2797 log.go:172] (0xc000a4e160) Data frame received for 5\nI1022 19:48:52.709337    2797 log.go:172] (0xc0006661e0) (5) Data frame handling\nI1022 19:48:52.709352    2797 log.go:172] (0xc0006661e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1022 19:48:52.758147    2797 log.go:172] (0xc000a4e160) Data frame received for 3\nI1022 19:48:52.758200    2797 log.go:172] (0xc0007fc280) (3) Data frame handling\nI1022 19:48:52.758226    2797 log.go:172] (0xc000a4e160) Data frame received for 5\nI1022 19:48:52.758250    2797 log.go:172] (0xc0006661e0) (5) Data frame handling\nI1022 19:48:52.758306    2797 log.go:172] (0xc0007fc280) (3) Data frame sent\nI1022 19:48:52.758383    2797 log.go:172] (0xc000a4e160) Data frame received for 3\nI1022 19:48:52.758398    2797 log.go:172] (0xc0007fc280) (3) Data frame handling\nI1022 19:48:52.760565    2797 log.go:172] (0xc000a4e160) Data frame received for 1\nI1022 19:48:52.760596    2797 log.go:172] (0xc0007fc140) (1) Data frame handling\nI1022 19:48:52.760630    2797 log.go:172] (0xc0007fc140) (1) Data frame sent\nI1022 19:48:52.760660    2797 log.go:172] (0xc000a4e160) (0xc0007fc140) Stream removed, broadcasting: 1\nI1022 19:48:52.760688    2797 log.go:172] (0xc000a4e160) Go away received\nI1022 19:48:52.761288    2797 log.go:172] (0xc000a4e160) (0xc0007fc140) Stream removed, broadcasting: 1\nI1022 19:48:52.761314    2797 log.go:172] (0xc000a4e160) (0xc0007fc280) Stream removed, broadcasting: 3\nI1022 19:48:52.761327    2797 log.go:172] (0xc000a4e160) (0xc0006661e0) Stream removed, broadcasting: 5\n"
Oct 22 19:48:52.766: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Oct 22 19:48:52.766: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Oct 22 19:48:52.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9491 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Oct 22 19:48:53.047: INFO: stderr: "I1022 19:48:52.911012    2818 log.go:172] (0xc00010c8f0) (0xc0002b4be0) Create stream\nI1022 19:48:52.911065    2818 log.go:172] (0xc00010c8f0) (0xc0002b4be0) Stream added, broadcasting: 1\nI1022 19:48:52.915088    2818 log.go:172] (0xc00010c8f0) Reply frame received for 1\nI1022 19:48:52.915132    2818 log.go:172] (0xc00010c8f0) (0xc0002b4460) Create stream\nI1022 19:48:52.915149    2818 log.go:172] (0xc00010c8f0) (0xc0002b4460) Stream added, broadcasting: 3\nI1022 19:48:52.916123    2818 log.go:172] (0xc00010c8f0) Reply frame received for 3\nI1022 19:48:52.916155    2818 log.go:172] (0xc00010c8f0) (0xc0003da000) Create stream\nI1022 19:48:52.916166    2818 log.go:172] (0xc00010c8f0) (0xc0003da000) Stream added, broadcasting: 5\nI1022 19:48:52.917156    2818 log.go:172] (0xc00010c8f0) Reply frame received for 5\nI1022 19:48:52.983089    2818 log.go:172] (0xc00010c8f0) Data frame received for 5\nI1022 19:48:52.983124    2818 log.go:172] (0xc0003da000) (5) Data frame handling\nI1022 19:48:52.983143    2818 log.go:172] (0xc0003da000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1022 19:48:53.038484    2818 log.go:172] (0xc00010c8f0) Data frame received for 3\nI1022 19:48:53.038523    2818 log.go:172] (0xc00010c8f0) Data frame received for 5\nI1022 19:48:53.038574    2818 log.go:172] (0xc0003da000) (5) Data frame handling\nI1022 19:48:53.038617    2818 log.go:172] (0xc0002b4460) (3) Data frame handling\nI1022 19:48:53.038671    2818 log.go:172] (0xc0002b4460) (3) Data frame sent\nI1022 19:48:53.038695    2818 log.go:172] (0xc00010c8f0) Data frame received for 3\nI1022 19:48:53.038718    2818 log.go:172] (0xc0002b4460) (3) Data frame handling\nI1022 19:48:53.041014    2818 log.go:172] (0xc00010c8f0) Data frame received for 1\nI1022 19:48:53.041077    2818 log.go:172] (0xc0002b4be0) (1) Data frame handling\nI1022 19:48:53.041119    2818 log.go:172] (0xc0002b4be0) (1) Data frame sent\nI1022 19:48:53.041176    2818 log.go:172] (0xc00010c8f0) (0xc0002b4be0) Stream removed, broadcasting: 1\nI1022 19:48:53.041201    2818 log.go:172] (0xc00010c8f0) Go away received\nI1022 19:48:53.041575    2818 log.go:172] (0xc00010c8f0) (0xc0002b4be0) Stream removed, broadcasting: 1\nI1022 19:48:53.041594    2818 log.go:172] (0xc00010c8f0) (0xc0002b4460) Stream removed, broadcasting: 3\nI1022 19:48:53.041601    2818 log.go:172] (0xc00010c8f0) (0xc0003da000) Stream removed, broadcasting: 5\n"
Oct 22 19:48:53.047: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Oct 22 19:48:53.047: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Oct 22 19:48:53.047: INFO: Waiting for statefulset status.replicas updated to 0
Oct 22 19:48:53.050: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Oct 22 19:49:03.059: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Oct 22 19:49:03.059: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Oct 22 19:49:03.059: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Oct 22 19:49:03.086: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999395s
Oct 22 19:49:04.091: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.980759606s
Oct 22 19:49:05.096: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.97557831s
Oct 22 19:49:06.100: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.971011617s
Oct 22 19:49:07.105: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.966321424s
Oct 22 19:49:08.109: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.962186931s
Oct 22 19:49:09.114: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.957656831s
Oct 22 19:49:10.119: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.953085779s
Oct 22 19:49:11.124: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.94806138s
Oct 22 19:49:12.129: INFO: Verifying statefulset ss doesn't scale past 3 for another 942.787399ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-9491
Oct 22 19:49:13.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9491 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:49:13.377: INFO: stderr: "I1022 19:49:13.265273    2838 log.go:172] (0xc000134fd0) (0xc0005bcbe0) Create stream\nI1022 19:49:13.265333    2838 log.go:172] (0xc000134fd0) (0xc0005bcbe0) Stream added, broadcasting: 1\nI1022 19:49:13.279345    2838 log.go:172] (0xc000134fd0) Reply frame received for 1\nI1022 19:49:13.279397    2838 log.go:172] (0xc000134fd0) (0xc0005bc320) Create stream\nI1022 19:49:13.279412    2838 log.go:172] (0xc000134fd0) (0xc0005bc320) Stream added, broadcasting: 3\nI1022 19:49:13.280234    2838 log.go:172] (0xc000134fd0) Reply frame received for 3\nI1022 19:49:13.280273    2838 log.go:172] (0xc000134fd0) (0xc00022a000) Create stream\nI1022 19:49:13.280282    2838 log.go:172] (0xc000134fd0) (0xc00022a000) Stream added, broadcasting: 5\nI1022 19:49:13.281246    2838 log.go:172] (0xc000134fd0) Reply frame received for 5\nI1022 19:49:13.370078    2838 log.go:172] (0xc000134fd0) Data frame received for 5\nI1022 19:49:13.370123    2838 log.go:172] (0xc00022a000) (5) Data frame handling\nI1022 19:49:13.370137    2838 log.go:172] (0xc00022a000) (5) Data frame sent\nI1022 19:49:13.370148    2838 log.go:172] (0xc000134fd0) Data frame received for 5\nI1022 19:49:13.370156    2838 log.go:172] (0xc00022a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI1022 19:49:13.370183    2838 log.go:172] (0xc000134fd0) Data frame received for 3\nI1022 19:49:13.370192    2838 log.go:172] (0xc0005bc320) (3) Data frame handling\nI1022 19:49:13.370210    2838 log.go:172] (0xc0005bc320) (3) Data frame sent\nI1022 19:49:13.370221    2838 log.go:172] (0xc000134fd0) Data frame received for 3\nI1022 19:49:13.370229    2838 log.go:172] (0xc0005bc320) (3) Data frame handling\nI1022 19:49:13.371282    2838 log.go:172] (0xc000134fd0) Data frame received for 1\nI1022 19:49:13.371315    2838 log.go:172] (0xc0005bcbe0) (1) Data frame handling\nI1022 19:49:13.371330    2838 log.go:172] (0xc0005bcbe0) (1) Data frame sent\nI1022 19:49:13.371347    2838 log.go:172] (0xc000134fd0) (0xc0005bcbe0) Stream removed, broadcasting: 1\nI1022 19:49:13.371376    2838 log.go:172] (0xc000134fd0) Go away received\nI1022 19:49:13.371724    2838 log.go:172] (0xc000134fd0) (0xc0005bcbe0) Stream removed, broadcasting: 1\nI1022 19:49:13.371739    2838 log.go:172] (0xc000134fd0) (0xc0005bc320) Stream removed, broadcasting: 3\nI1022 19:49:13.371746    2838 log.go:172] (0xc000134fd0) (0xc00022a000) Stream removed, broadcasting: 5\n"
Oct 22 19:49:13.377: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Oct 22 19:49:13.377: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Oct 22 19:49:13.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9491 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:49:13.588: INFO: stderr: "I1022 19:49:13.513972    2860 log.go:172] (0xc000a8a420) (0xc0009e2640) Create stream\nI1022 19:49:13.514044    2860 log.go:172] (0xc000a8a420) (0xc0009e2640) Stream added, broadcasting: 1\nI1022 19:49:13.516492    2860 log.go:172] (0xc000a8a420) Reply frame received for 1\nI1022 19:49:13.516548    2860 log.go:172] (0xc000a8a420) (0xc000916000) Create stream\nI1022 19:49:13.516563    2860 log.go:172] (0xc000a8a420) (0xc000916000) Stream added, broadcasting: 3\nI1022 19:49:13.517643    2860 log.go:172] (0xc000a8a420) Reply frame received for 3\nI1022 19:49:13.517677    2860 log.go:172] (0xc000a8a420) (0xc000698320) Create stream\nI1022 19:49:13.517702    2860 log.go:172] (0xc000a8a420) (0xc000698320) Stream added, broadcasting: 5\nI1022 19:49:13.518605    2860 log.go:172] (0xc000a8a420) Reply frame received for 5\nI1022 19:49:13.580621    2860 log.go:172] (0xc000a8a420) Data frame received for 3\nI1022 19:49:13.580667    2860 log.go:172] (0xc000916000) (3) Data frame handling\nI1022 19:49:13.580682    2860 log.go:172] (0xc000916000) (3) Data frame sent\nI1022 19:49:13.580694    2860 log.go:172] (0xc000a8a420) Data frame received for 3\nI1022 19:49:13.580704    2860 log.go:172] (0xc000916000) (3) Data frame handling\nI1022 19:49:13.580740    2860 log.go:172] (0xc000a8a420) Data frame received for 5\nI1022 19:49:13.580758    2860 log.go:172] (0xc000698320) (5) Data frame handling\nI1022 19:49:13.580780    2860 log.go:172] (0xc000698320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI1022 19:49:13.580799    2860 log.go:172] (0xc000a8a420) Data frame received for 5\nI1022 19:49:13.581029    2860 log.go:172] (0xc000698320) (5) Data frame handling\nI1022 19:49:13.582135    2860 log.go:172] (0xc000a8a420) Data frame received for 1\nI1022 19:49:13.582168    2860 log.go:172] (0xc0009e2640) (1) Data frame handling\nI1022 19:49:13.582181    2860 log.go:172] (0xc0009e2640) (1) Data frame sent\nI1022 19:49:13.582194    2860 log.go:172] (0xc000a8a420) (0xc0009e2640) Stream removed, broadcasting: 1\nI1022 19:49:13.582213    2860 log.go:172] (0xc000a8a420) Go away received\nI1022 19:49:13.582507    2860 log.go:172] (0xc000a8a420) (0xc0009e2640) Stream removed, broadcasting: 1\nI1022 19:49:13.582520    2860 log.go:172] (0xc000a8a420) (0xc000916000) Stream removed, broadcasting: 3\nI1022 19:49:13.582526    2860 log.go:172] (0xc000a8a420) (0xc000698320) Stream removed, broadcasting: 5\n"
Oct 22 19:49:13.589: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Oct 22 19:49:13.589: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Oct 22 19:49:13.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9491 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:49:13.812: INFO: stderr: "I1022 19:49:13.738993    2883 log.go:172] (0xc000ace420) (0xc00093c8c0) Create stream\nI1022 19:49:13.739068    2883 log.go:172] (0xc000ace420) (0xc00093c8c0) Stream added, broadcasting: 1\nI1022 19:49:13.741927    2883 log.go:172] (0xc000ace420) Reply frame received for 1\nI1022 19:49:13.741967    2883 log.go:172] (0xc000ace420) (0xc00093c000) Create stream\nI1022 19:49:13.741988    2883 log.go:172] (0xc000ace420) (0xc00093c000) Stream added, broadcasting: 3\nI1022 19:49:13.742760    2883 log.go:172] (0xc000ace420) Reply frame received for 3\nI1022 19:49:13.742816    2883 log.go:172] (0xc000ace420) (0xc00068a280) Create stream\nI1022 19:49:13.742833    2883 log.go:172] (0xc000ace420) (0xc00068a280) Stream added, broadcasting: 5\nI1022 19:49:13.743480    2883 log.go:172] (0xc000ace420) Reply frame received for 5\nI1022 19:49:13.806805    2883 log.go:172] (0xc000ace420) Data frame received for 3\nI1022 19:49:13.806837    2883 log.go:172] (0xc00093c000) (3) Data frame handling\nI1022 19:49:13.806859    2883 log.go:172] (0xc00093c000) (3) Data frame sent\nI1022 19:49:13.806869    2883 log.go:172] (0xc000ace420) Data frame received for 3\nI1022 19:49:13.806878    2883 log.go:172] (0xc00093c000) (3) Data frame handling\nI1022 19:49:13.807051    2883 log.go:172] (0xc000ace420) Data frame received for 5\nI1022 19:49:13.807075    2883 log.go:172] (0xc00068a280) (5) Data frame handling\nI1022 19:49:13.807097    2883 log.go:172] (0xc00068a280) (5) Data frame sent\nI1022 19:49:13.807112    2883 log.go:172] (0xc000ace420) Data frame received for 5\nI1022 19:49:13.807122    2883 log.go:172] (0xc00068a280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI1022 19:49:13.808186    2883 log.go:172] (0xc000ace420) Data frame received for 1\nI1022 19:49:13.808197    2883 log.go:172] (0xc00093c8c0) (1) Data frame handling\nI1022 19:49:13.808210    2883 log.go:172] (0xc00093c8c0) (1) Data frame sent\nI1022 19:49:13.808307    2883 log.go:172] (0xc000ace420) (0xc00093c8c0) Stream removed, broadcasting: 1\nI1022 19:49:13.808563    2883 log.go:172] (0xc000ace420) (0xc00093c8c0) Stream removed, broadcasting: 1\nI1022 19:49:13.808584    2883 log.go:172] (0xc000ace420) (0xc00093c000) Stream removed, broadcasting: 3\nI1022 19:49:13.808592    2883 log.go:172] (0xc000ace420) (0xc00068a280) Stream removed, broadcasting: 5\n"
Oct 22 19:49:13.813: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Oct 22 19:49:13.813: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Oct 22 19:49:13.813: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
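With OrderedReady pod management the controller removes pods from the highest ordinal down (ss-2, then ss-1, then ss-0), waiting for each pod to terminate completely before deleting the next. A hand-run way to watch the same ordering (a sketch using the watcher's selector from this run) is:

  kubectl -n statefulset-9491 get pods -l baz=blah,foo=bar -w    # watch deletions arrive in reverse ordinal order
  kubectl -n statefulset-9491 scale statefulset ss --replicas=0  # issued from another shell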
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Oct 22 19:49:33.845: INFO: Deleting all statefulset in ns statefulset-9491
Oct 22 19:49:33.847: INFO: Scaling statefulset ss to 0
Oct 22 19:49:33.856: INFO: Waiting for statefulset status.replicas updated to 0
Oct 22 19:49:33.859: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:49:33.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9491" for this suite.
Oct 22 19:49:39.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:49:39.962: INFO: namespace statefulset-9491 deletion completed in 6.081974946s

• [SLOW TEST:88.409 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:49:39.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
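The pod mounts a projected downward API volume with an explicit defaultMode, and its client-container apparently writes the observed file permissions to its log, which the harness fetches once the pod succeeds. On a long-running pod with the same kind of volume, a hand-run spot check might look like this sketch (pod name and mount path are placeholders, not taken from this run):

  # File modes under the mount should match the volume's defaultMode
  kubectl -n projected-6145 exec <pod-name> -- ls -ln <mount-path>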
Oct 22 19:49:40.028: INFO: Waiting up to 5m0s for pod "downwardapi-volume-39e2d68e-410e-4360-8b88-c61a2e17cf31" in namespace "projected-6145" to be "success or failure"
Oct 22 19:49:40.031: INFO: Pod "downwardapi-volume-39e2d68e-410e-4360-8b88-c61a2e17cf31": Phase="Pending", Reason="", readiness=false. Elapsed: 3.343118ms
Oct 22 19:49:42.086: INFO: Pod "downwardapi-volume-39e2d68e-410e-4360-8b88-c61a2e17cf31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057679727s
Oct 22 19:49:44.090: INFO: Pod "downwardapi-volume-39e2d68e-410e-4360-8b88-c61a2e17cf31": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062315372s
Oct 22 19:49:46.095: INFO: Pod "downwardapi-volume-39e2d68e-410e-4360-8b88-c61a2e17cf31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.066691177s
STEP: Saw pod success
Oct 22 19:49:46.095: INFO: Pod "downwardapi-volume-39e2d68e-410e-4360-8b88-c61a2e17cf31" satisfied condition "success or failure"
Oct 22 19:49:46.098: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-39e2d68e-410e-4360-8b88-c61a2e17cf31 container client-container: 
STEP: delete the pod
Oct 22 19:49:46.146: INFO: Waiting for pod downwardapi-volume-39e2d68e-410e-4360-8b88-c61a2e17cf31 to disappear
Oct 22 19:49:46.157: INFO: Pod downwardapi-volume-39e2d68e-410e-4360-8b88-c61a2e17cf31 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:49:46.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6145" for this suite.
Oct 22 19:49:52.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:49:52.297: INFO: namespace projected-6145 deletion completed in 6.135463498s

• [SLOW TEST:12.334 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:49:52.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2935
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Oct 22 19:49:52.415: INFO: Found 0 stateful pods, waiting for 3
Oct 22 19:50:02.420: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 22 19:50:02.420: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 22 19:50:02.420: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Oct 22 19:50:12.421: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 22 19:50:12.421: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 22 19:50:12.421: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Oct 22 19:50:12.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2935 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Oct 22 19:50:12.712: INFO: stderr: "I1022 19:50:12.572052    2903 log.go:172] (0xc000714d10) (0xc0006ac960) Create stream\nI1022 19:50:12.572109    2903 log.go:172] (0xc000714d10) (0xc0006ac960) Stream added, broadcasting: 1\nI1022 19:50:12.574384    2903 log.go:172] (0xc000714d10) Reply frame received for 1\nI1022 19:50:12.574427    2903 log.go:172] (0xc000714d10) (0xc0008ec000) Create stream\nI1022 19:50:12.574441    2903 log.go:172] (0xc000714d10) (0xc0008ec000) Stream added, broadcasting: 3\nI1022 19:50:12.575230    2903 log.go:172] (0xc000714d10) Reply frame received for 3\nI1022 19:50:12.575282    2903 log.go:172] (0xc000714d10) (0xc0006e2000) Create stream\nI1022 19:50:12.575309    2903 log.go:172] (0xc000714d10) (0xc0006e2000) Stream added, broadcasting: 5\nI1022 19:50:12.576189    2903 log.go:172] (0xc000714d10) Reply frame received for 5\nI1022 19:50:12.662124    2903 log.go:172] (0xc000714d10) Data frame received for 5\nI1022 19:50:12.662152    2903 log.go:172] (0xc0006e2000) (5) Data frame handling\nI1022 19:50:12.662168    2903 log.go:172] (0xc0006e2000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1022 19:50:12.702234    2903 log.go:172] (0xc000714d10) Data frame received for 5\nI1022 19:50:12.702259    2903 log.go:172] (0xc0006e2000) (5) Data frame handling\nI1022 19:50:12.702311    2903 log.go:172] (0xc000714d10) Data frame received for 3\nI1022 19:50:12.702359    2903 log.go:172] (0xc0008ec000) (3) Data frame handling\nI1022 19:50:12.702396    2903 log.go:172] (0xc0008ec000) (3) Data frame sent\nI1022 19:50:12.702416    2903 log.go:172] (0xc000714d10) Data frame received for 3\nI1022 19:50:12.702429    2903 log.go:172] (0xc0008ec000) (3) Data frame handling\nI1022 19:50:12.704487    2903 log.go:172] (0xc000714d10) Data frame received for 1\nI1022 19:50:12.704517    2903 log.go:172] (0xc0006ac960) (1) Data frame handling\nI1022 19:50:12.704545    2903 log.go:172] (0xc0006ac960) (1) Data frame sent\nI1022 19:50:12.704574    2903 log.go:172] (0xc000714d10) (0xc0006ac960) Stream removed, broadcasting: 1\nI1022 19:50:12.704599    2903 log.go:172] (0xc000714d10) Go away received\nI1022 19:50:12.707363    2903 log.go:172] (0xc000714d10) (0xc0006ac960) Stream removed, broadcasting: 1\nI1022 19:50:12.707394    2903 log.go:172] (0xc000714d10) (0xc0008ec000) Stream removed, broadcasting: 3\nI1022 19:50:12.707410    2903 log.go:172] (0xc000714d10) (0xc0006e2000) Stream removed, broadcasting: 5\n"
Oct 22 19:50:12.712: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Oct 22 19:50:12.712: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Oct 22 19:50:22.748: INFO: Updating stateful set ss2
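The template update itself goes through the API (only "Updating stateful set ss2" is logged). A roughly equivalent manual update and progress check, as a sketch with the container name left as a placeholder, would be:

  kubectl -n statefulset-2935 set image statefulset/ss2 <container>=docker.io/library/nginx:1.15-alpine
  kubectl -n statefulset-2935 rollout status statefulset/ss2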
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Oct 22 19:50:32.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2935 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:50:33.000: INFO: stderr: "I1022 19:50:32.907196    2922 log.go:172] (0xc000128fd0) (0xc000322820) Create stream\nI1022 19:50:32.907238    2922 log.go:172] (0xc000128fd0) (0xc000322820) Stream added, broadcasting: 1\nI1022 19:50:32.909187    2922 log.go:172] (0xc000128fd0) Reply frame received for 1\nI1022 19:50:32.909222    2922 log.go:172] (0xc000128fd0) (0xc000594320) Create stream\nI1022 19:50:32.909232    2922 log.go:172] (0xc000128fd0) (0xc000594320) Stream added, broadcasting: 3\nI1022 19:50:32.910264    2922 log.go:172] (0xc000128fd0) Reply frame received for 3\nI1022 19:50:32.910308    2922 log.go:172] (0xc000128fd0) (0xc0003228c0) Create stream\nI1022 19:50:32.910323    2922 log.go:172] (0xc000128fd0) (0xc0003228c0) Stream added, broadcasting: 5\nI1022 19:50:32.911207    2922 log.go:172] (0xc000128fd0) Reply frame received for 5\nI1022 19:50:32.992061    2922 log.go:172] (0xc000128fd0) Data frame received for 5\nI1022 19:50:32.992081    2922 log.go:172] (0xc0003228c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI1022 19:50:32.992106    2922 log.go:172] (0xc000128fd0) Data frame received for 3\nI1022 19:50:32.992140    2922 log.go:172] (0xc000594320) (3) Data frame handling\nI1022 19:50:32.992152    2922 log.go:172] (0xc000594320) (3) Data frame sent\nI1022 19:50:32.992160    2922 log.go:172] (0xc000128fd0) Data frame received for 3\nI1022 19:50:32.992167    2922 log.go:172] (0xc000594320) (3) Data frame handling\nI1022 19:50:32.992200    2922 log.go:172] (0xc0003228c0) (5) Data frame sent\nI1022 19:50:32.992213    2922 log.go:172] (0xc000128fd0) Data frame received for 5\nI1022 19:50:32.992225    2922 log.go:172] (0xc0003228c0) (5) Data frame handling\nI1022 19:50:32.994206    2922 log.go:172] (0xc000128fd0) Data frame received for 1\nI1022 19:50:32.994229    2922 log.go:172] (0xc000322820) (1) Data frame handling\nI1022 19:50:32.994251    2922 log.go:172] (0xc000322820) (1) Data frame sent\nI1022 19:50:32.994268    2922 log.go:172] (0xc000128fd0) (0xc000322820) Stream removed, broadcasting: 1\nI1022 19:50:32.994284    2922 log.go:172] (0xc000128fd0) Go away received\nI1022 19:50:32.994725    2922 log.go:172] (0xc000128fd0) (0xc000322820) Stream removed, broadcasting: 1\nI1022 19:50:32.994752    2922 log.go:172] (0xc000128fd0) (0xc000594320) Stream removed, broadcasting: 3\nI1022 19:50:32.994769    2922 log.go:172] (0xc000128fd0) (0xc0003228c0) Stream removed, broadcasting: 5\n"
Oct 22 19:50:33.000: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Oct 22 19:50:33.000: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Oct 22 19:50:43.021: INFO: Waiting for StatefulSet statefulset-2935/ss2 to complete update
Oct 22 19:50:43.021: INFO: Waiting for Pod statefulset-2935/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Oct 22 19:50:43.021: INFO: Waiting for Pod statefulset-2935/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Oct 22 19:50:43.021: INFO: Waiting for Pod statefulset-2935/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Oct 22 19:50:55.999: INFO: Waiting for StatefulSet statefulset-2935/ss2 to complete update
Oct 22 19:50:55.999: INFO: Waiting for Pod statefulset-2935/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Oct 22 19:51:03.027: INFO: Waiting for StatefulSet statefulset-2935/ss2 to complete update
Oct 22 19:51:03.027: INFO: Waiting for Pod statefulset-2935/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
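The rollback is likewise driven through the API; a roughly equivalent manual sequence (a sketch, not the test's own mechanism) would be:

  kubectl -n statefulset-2935 rollout history statefulset/ss2   # shows the two revisions seen above
  kubectl -n statefulset-2935 rollout undo statefulset/ss2      # return to the previous revision

As with the forward update, the next exec breaks ss2-1's readiness first and restores it later ("Rolling back update in reverse ordinal order"), which gates how the rolled-back pods are released.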
Oct 22 19:51:13.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2935 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Oct 22 19:51:15.871: INFO: stderr: "I1022 19:51:15.706627    2944 log.go:172] (0xc00010c6e0) (0xc0007b2780) Create stream\nI1022 19:51:15.706686    2944 log.go:172] (0xc00010c6e0) (0xc0007b2780) Stream added, broadcasting: 1\nI1022 19:51:15.709982    2944 log.go:172] (0xc00010c6e0) Reply frame received for 1\nI1022 19:51:15.710031    2944 log.go:172] (0xc00010c6e0) (0xc000718000) Create stream\nI1022 19:51:15.710045    2944 log.go:172] (0xc00010c6e0) (0xc000718000) Stream added, broadcasting: 3\nI1022 19:51:15.711261    2944 log.go:172] (0xc00010c6e0) Reply frame received for 3\nI1022 19:51:15.711306    2944 log.go:172] (0xc00010c6e0) (0xc0007e2000) Create stream\nI1022 19:51:15.711324    2944 log.go:172] (0xc00010c6e0) (0xc0007e2000) Stream added, broadcasting: 5\nI1022 19:51:15.712318    2944 log.go:172] (0xc00010c6e0) Reply frame received for 5\nI1022 19:51:15.808059    2944 log.go:172] (0xc00010c6e0) Data frame received for 5\nI1022 19:51:15.808085    2944 log.go:172] (0xc0007e2000) (5) Data frame handling\nI1022 19:51:15.808101    2944 log.go:172] (0xc0007e2000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1022 19:51:15.860756    2944 log.go:172] (0xc00010c6e0) Data frame received for 3\nI1022 19:51:15.860789    2944 log.go:172] (0xc000718000) (3) Data frame handling\nI1022 19:51:15.860808    2944 log.go:172] (0xc000718000) (3) Data frame sent\nI1022 19:51:15.861183    2944 log.go:172] (0xc00010c6e0) Data frame received for 3\nI1022 19:51:15.861221    2944 log.go:172] (0xc000718000) (3) Data frame handling\nI1022 19:51:15.861686    2944 log.go:172] (0xc00010c6e0) Data frame received for 5\nI1022 19:51:15.861722    2944 log.go:172] (0xc0007e2000) (5) Data frame handling\nI1022 19:51:15.862895    2944 log.go:172] (0xc00010c6e0) Data frame received for 1\nI1022 19:51:15.862933    2944 log.go:172] (0xc0007b2780) (1) Data frame handling\nI1022 19:51:15.862971    2944 log.go:172] (0xc0007b2780) (1) Data frame sent\nI1022 19:51:15.862997    2944 log.go:172] (0xc00010c6e0) (0xc0007b2780) Stream removed, broadcasting: 1\nI1022 19:51:15.863053    2944 log.go:172] (0xc00010c6e0) Go away received\nI1022 19:51:15.863546    2944 log.go:172] (0xc00010c6e0) (0xc0007b2780) Stream removed, broadcasting: 1\nI1022 19:51:15.863568    2944 log.go:172] (0xc00010c6e0) (0xc000718000) Stream removed, broadcasting: 3\nI1022 19:51:15.863579    2944 log.go:172] (0xc00010c6e0) (0xc0007e2000) Stream removed, broadcasting: 5\n"
Oct 22 19:51:15.871: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Oct 22 19:51:15.871: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Oct 22 19:51:25.903: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Oct 22 19:51:35.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2935 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Oct 22 19:51:36.159: INFO: stderr: "I1022 19:51:36.063439    2976 log.go:172] (0xc000a1a6e0) (0xc000646aa0) Create stream\nI1022 19:51:36.063498    2976 log.go:172] (0xc000a1a6e0) (0xc000646aa0) Stream added, broadcasting: 1\nI1022 19:51:36.067013    2976 log.go:172] (0xc000a1a6e0) Reply frame received for 1\nI1022 19:51:36.067175    2976 log.go:172] (0xc000a1a6e0) (0xc000910000) Create stream\nI1022 19:51:36.067307    2976 log.go:172] (0xc000a1a6e0) (0xc000910000) Stream added, broadcasting: 3\nI1022 19:51:36.068373    2976 log.go:172] (0xc000a1a6e0) Reply frame received for 3\nI1022 19:51:36.068414    2976 log.go:172] (0xc000a1a6e0) (0xc0009100a0) Create stream\nI1022 19:51:36.068429    2976 log.go:172] (0xc000a1a6e0) (0xc0009100a0) Stream added, broadcasting: 5\nI1022 19:51:36.069423    2976 log.go:172] (0xc000a1a6e0) Reply frame received for 5\nI1022 19:51:36.145697    2976 log.go:172] (0xc000a1a6e0) Data frame received for 3\nI1022 19:51:36.145743    2976 log.go:172] (0xc000910000) (3) Data frame handling\nI1022 19:51:36.145772    2976 log.go:172] (0xc000910000) (3) Data frame sent\nI1022 19:51:36.145789    2976 log.go:172] (0xc000a1a6e0) Data frame received for 3\nI1022 19:51:36.145797    2976 log.go:172] (0xc000910000) (3) Data frame handling\nI1022 19:51:36.145988    2976 log.go:172] (0xc000a1a6e0) Data frame received for 5\nI1022 19:51:36.146020    2976 log.go:172] (0xc0009100a0) (5) Data frame handling\nI1022 19:51:36.146039    2976 log.go:172] (0xc0009100a0) (5) Data frame sent\nI1022 19:51:36.146053    2976 log.go:172] (0xc000a1a6e0) Data frame received for 5\nI1022 19:51:36.146065    2976 log.go:172] (0xc0009100a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI1022 19:51:36.152457    2976 log.go:172] (0xc000a1a6e0) Data frame received for 1\nI1022 19:51:36.152482    2976 log.go:172] (0xc000646aa0) (1) Data frame handling\nI1022 19:51:36.152494    2976 log.go:172] (0xc000646aa0) (1) Data frame sent\nI1022 19:51:36.152516    2976 log.go:172] (0xc000a1a6e0) (0xc000646aa0) Stream removed, broadcasting: 1\nI1022 19:51:36.152805    2976 log.go:172] (0xc000a1a6e0) (0xc000646aa0) Stream removed, broadcasting: 1\nI1022 19:51:36.152825    2976 log.go:172] (0xc000a1a6e0) (0xc000910000) Stream removed, broadcasting: 3\nI1022 19:51:36.152845    2976 log.go:172] (0xc000a1a6e0) (0xc0009100a0) Stream removed, broadcasting: 5\n"
Oct 22 19:51:36.159: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Oct 22 19:51:36.159: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Oct 22 19:51:46.174: INFO: Waiting for StatefulSet statefulset-2935/ss2 to complete update
Oct 22 19:51:46.174: INFO: Waiting for Pod statefulset-2935/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Oct 22 19:51:46.175: INFO: Waiting for Pod statefulset-2935/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Oct 22 19:51:56.369: INFO: Waiting for StatefulSet statefulset-2935/ss2 to complete update
Oct 22 19:51:56.369: INFO: Waiting for Pod statefulset-2935/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Oct 22 19:52:06.194: INFO: Waiting for StatefulSet statefulset-2935/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Oct 22 19:52:16.182: INFO: Deleting all statefulset in ns statefulset-2935
Oct 22 19:52:16.185: INFO: Scaling statefulset ss2 to 0
Oct 22 19:52:46.209: INFO: Waiting for statefulset status.replicas updated to 0
Oct 22 19:52:46.213: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:52:46.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2935" for this suite.
Oct 22 19:52:52.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:52:52.342: INFO: namespace statefulset-2935 deletion completed in 6.118043965s

• [SLOW TEST:180.044 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
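
The rollback phase above restores index.html inside pod ss2-1 with a kubectl exec before the StatefulSet template is reverted. A minimal Go sketch of that shell-out, mirroring the exact command logged at 19:51:35; it assumes kubectl is on PATH and uses the kubeconfig path and names (namespace statefulset-2935, pod ss2-1) taken from the log, not a definitive reproduction of the framework's own exec helper.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Mirror of the logged command: move index.html back into the nginx docroot
	// on ss2-1 so the rolled-back revision serves the original content.
	cmd := exec.Command("kubectl",
		"--kubeconfig=/root/.kube/config",
		"exec", "--namespace=statefulset-2935", "ss2-1",
		"--", "/bin/sh", "-x", "-c",
		"mv -v /tmp/index.html /usr/share/nginx/html/ || true")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl exec failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out) // expect: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
}
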
S
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:52:52.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-5ecc23c9-d638-44e1-8e8f-a8e376f81f1b
STEP: Creating a pod to test consume secrets
Oct 22 19:52:52.426: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fc54930f-75ad-423c-abe5-a075c221f6c2" in namespace "projected-6433" to be "success or failure"
Oct 22 19:52:52.469: INFO: Pod "pod-projected-secrets-fc54930f-75ad-423c-abe5-a075c221f6c2": Phase="Pending", Reason="", readiness=false. Elapsed: 43.471315ms
Oct 22 19:52:54.473: INFO: Pod "pod-projected-secrets-fc54930f-75ad-423c-abe5-a075c221f6c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047658321s
Oct 22 19:52:56.481: INFO: Pod "pod-projected-secrets-fc54930f-75ad-423c-abe5-a075c221f6c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055196159s
STEP: Saw pod success
Oct 22 19:52:56.481: INFO: Pod "pod-projected-secrets-fc54930f-75ad-423c-abe5-a075c221f6c2" satisfied condition "success or failure"
Oct 22 19:52:56.483: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-fc54930f-75ad-423c-abe5-a075c221f6c2 container secret-volume-test: 
STEP: delete the pod
Oct 22 19:52:56.518: INFO: Waiting for pod pod-projected-secrets-fc54930f-75ad-423c-abe5-a075c221f6c2 to disappear
Oct 22 19:52:56.533: INFO: Pod pod-projected-secrets-fc54930f-75ad-423c-abe5-a075c221f6c2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:52:56.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6433" for this suite.
Oct 22 19:53:02.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:53:02.638: INFO: namespace projected-6433 deletion completed in 6.102336081s

• [SLOW TEST:10.296 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
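
The projected-secret test above mounts one Secret into a pod through two projected volumes and checks the mounted files. A hypothetical manifest of that shape, piped to kubectl the same way the suite pipes manifests elsewhere in this run; the object names, image, and mount paths are placeholders, not the generated names in the log.

package main

import (
	"log"
	"os/exec"
	"strings"
)

// One Secret consumed through two projected volumes in the same pod.
const manifest = `
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-secret-volume-1/data-1 /etc/projected-secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/projected-secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/projected-secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-demo
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-demo
`

func main() {
	// Create both objects in one shot; kubectl accepts multi-document YAML on stdin.
	cmd := exec.Command("kubectl", "--kubeconfig=/root/.kube/config", "create", "-f", "-")
	cmd.Stdin = strings.NewReader(manifest)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("create failed: %v\n%s", err, out)
	}
}
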
SSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:53:02.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Oct 22 19:53:06.801: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Oct 22 19:53:16.918: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:53:16.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3407" for this suite.
Oct 22 19:53:22.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:53:23.019: INFO: namespace pods-3407 deletion completed in 6.092606747s

• [SLOW TEST:20.381 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
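
The Delete Grace Period test submits a pod, deletes it gracefully, and waits for the kubelet to observe the termination. A rough sketch of that flow driven through kubectl rather than the API client the test actually uses; the pod and namespace names are placeholders, and the 30-second grace period is only an illustrative value.

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	kubectl := func(args ...string) ([]byte, error) {
		full := append([]string{"--kubeconfig=/root/.kube/config", "--namespace=pods-demo"}, args...)
		return exec.Command("kubectl", full...).CombinedOutput()
	}

	// Graceful delete: the pod gets a termination notice and up to 30s to exit.
	if out, err := kubectl("delete", "pod", "pod-submit-remove-demo", "--grace-period=30"); err != nil {
		log.Fatalf("delete failed: %v\n%s", err, out)
	}

	// Poll until the pod is gone, the same "Waiting for pod ... to disappear"
	// loop the framework logs throughout this run.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if _, err := kubectl("get", "pod", "pod-submit-remove-demo"); err != nil {
			log.Println("pod no longer exists")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for the pod to disappear")
}
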
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:53:23.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Oct 22 19:53:23.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-673'
Oct 22 19:53:23.234: INFO: stderr: ""
Oct 22 19:53:23.234: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Oct 22 19:53:28.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-673 -o json'
Oct 22 19:53:28.381: INFO: stderr: ""
Oct 22 19:53:28.381: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-10-22T19:53:23Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-673\",\n        \"resourceVersion\": \"5318221\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-673/pods/e2e-test-nginx-pod\",\n        \"uid\": \"a9c09b66-0224-4520-8e85-3bdbc986f365\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-z92bb\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-worker2\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-z92bb\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-z92bb\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-10-22T19:53:23Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-10-22T19:53:26Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-10-22T19:53:26Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-10-22T19:53:23Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://6f4b7b46b0b4947b9b84df3dcc6e7d7a1fe2c4592d6bb3ec2730507cab3f9478\",\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-10-22T19:53:25Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.5\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.7\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-10-22T19:53:23Z\"\n    }\n}\n"
STEP: replace the image in the pod
Oct 22 19:53:28.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-673'
Oct 22 19:53:28.650: INFO: stderr: ""
Oct 22 19:53:28.650: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Oct 22 19:53:28.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-673'
Oct 22 19:53:35.646: INFO: stderr: ""
Oct 22 19:53:35.646: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:53:35.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-673" for this suite.
Oct 22 19:53:41.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:53:41.743: INFO: namespace kubectl-673 deletion completed in 6.089133749s

• [SLOW TEST:18.723 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
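
The Kubectl replace test above fetches the running pod as JSON, swaps its image, and pipes the result back through "kubectl replace -f -". A rough sketch of that round trip using a textual image swap; the suite mutates the typed object instead, so treat this only as an illustration of the logged commands and image names.

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	base := []string{"--kubeconfig=/root/.kube/config", "--namespace=kubectl-673"}

	// Step 1: dump the live pod spec as JSON (same as the logged "get pod ... -o json").
	out, err := exec.Command("kubectl", append(base, "get", "pod", "e2e-test-nginx-pod", "-o", "json")...).Output()
	if err != nil {
		log.Fatalf("get failed: %v", err)
	}

	// Step 2: swap the container image, matching the two images named in the log.
	updated := strings.ReplaceAll(string(out),
		"docker.io/library/nginx:1.14-alpine",
		"docker.io/library/busybox:1.29")

	// Step 3: feed the edited object back through "replace -f -".
	cmd := exec.Command("kubectl", append(base, "replace", "-f", "-")...)
	cmd.Stdin = strings.NewReader(updated)
	if res, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("replace failed: %v\n%s", err, res)
	} else {
		log.Printf("%s", res) // expect: pod/e2e-test-nginx-pod replaced
	}
}
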
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:53:41.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:53:41.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7397" for this suite.
Oct 22 19:54:03.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:54:03.905: INFO: namespace pods-7397 deletion completed in 22.088664983s

• [SLOW TEST:22.160 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
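
The QOS Class test only checks that the API reports a qosClass in pod status. A small sketch of the same check done from the CLI, with placeholder pod and namespace names; as background, Guaranteed means every container's requests equal its limits, Burstable means some requests or limits are set, and BestEffort means none are.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Read the QoS class the control plane assigned to the pod.
	out, err := exec.Command("kubectl",
		"--kubeconfig=/root/.kube/config",
		"--namespace=pods-demo",
		"get", "pod", "qos-demo",
		"-o", "jsonpath={.status.qosClass}").Output()
	if err != nil {
		log.Fatalf("get failed: %v", err)
	}
	fmt.Printf("qosClass: %s\n", out) // Guaranteed, Burstable, or BestEffort
}
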
SSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:54:03.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Oct 22 19:54:03.941: INFO: Waiting up to 5m0s for pod "downward-api-68c27ca3-fb5d-4c91-be09-841f2798037c" in namespace "downward-api-6889" to be "success or failure"
Oct 22 19:54:03.964: INFO: Pod "downward-api-68c27ca3-fb5d-4c91-be09-841f2798037c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.086712ms
Oct 22 19:54:05.967: INFO: Pod "downward-api-68c27ca3-fb5d-4c91-be09-841f2798037c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02587202s
Oct 22 19:54:07.972: INFO: Pod "downward-api-68c27ca3-fb5d-4c91-be09-841f2798037c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03032991s
STEP: Saw pod success
Oct 22 19:54:07.972: INFO: Pod "downward-api-68c27ca3-fb5d-4c91-be09-841f2798037c" satisfied condition "success or failure"
Oct 22 19:54:07.975: INFO: Trying to get logs from node iruya-worker2 pod downward-api-68c27ca3-fb5d-4c91-be09-841f2798037c container dapi-container: 
STEP: delete the pod
Oct 22 19:54:08.014: INFO: Waiting for pod downward-api-68c27ca3-fb5d-4c91-be09-841f2798037c to disappear
Oct 22 19:54:08.017: INFO: Pod downward-api-68c27ca3-fb5d-4c91-be09-841f2798037c no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:54:08.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6889" for this suite.
Oct 22 19:54:14.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:54:14.125: INFO: namespace downward-api-6889 deletion completed in 6.104046463s

• [SLOW TEST:10.220 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:54:14.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Oct 22 19:54:14.194: INFO: Waiting up to 5m0s for pod "downward-api-753b3926-62ca-4c9e-a1db-fe46318e13bc" in namespace "downward-api-3364" to be "success or failure"
Oct 22 19:54:14.197: INFO: Pod "downward-api-753b3926-62ca-4c9e-a1db-fe46318e13bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.766643ms
Oct 22 19:54:16.201: INFO: Pod "downward-api-753b3926-62ca-4c9e-a1db-fe46318e13bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007131852s
Oct 22 19:54:18.205: INFO: Pod "downward-api-753b3926-62ca-4c9e-a1db-fe46318e13bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011103678s
STEP: Saw pod success
Oct 22 19:54:18.205: INFO: Pod "downward-api-753b3926-62ca-4c9e-a1db-fe46318e13bc" satisfied condition "success or failure"
Oct 22 19:54:18.208: INFO: Trying to get logs from node iruya-worker pod downward-api-753b3926-62ca-4c9e-a1db-fe46318e13bc container dapi-container: 
STEP: delete the pod
Oct 22 19:54:18.275: INFO: Waiting for pod downward-api-753b3926-62ca-4c9e-a1db-fe46318e13bc to disappear
Oct 22 19:54:18.287: INFO: Pod downward-api-753b3926-62ca-4c9e-a1db-fe46318e13bc no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:54:18.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3364" for this suite.
Oct 22 19:54:24.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:54:24.394: INFO: namespace downward-api-3364 deletion completed in 6.102272782s

• [SLOW TEST:10.268 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
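
The two Downward API tests above expose the host IP and the pod UID to the container as environment variables. A hypothetical pod covering both fieldRefs in one manifest; names and image are placeholders, and the suite then fetches the container log (the "Trying to get logs ... container dapi-container" lines) to check the printed values.

package main

import (
	"log"
	"os/exec"
	"strings"
)

// HOST_IP comes from status.hostIP, POD_UID from metadata.uid.
const manifest = `
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep -E 'HOST_IP|POD_UID'"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
`

func main() {
	cmd := exec.Command("kubectl", "--kubeconfig=/root/.kube/config", "create", "-f", "-")
	cmd.Stdin = strings.NewReader(manifest)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("create failed: %v\n%s", err, out)
	}
}
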
SSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:54:24.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-a204e213-5e66-4945-bd6f-84766bbbb2c9
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:54:24.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2557" for this suite.
Oct 22 19:54:30.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:54:30.586: INFO: namespace configmap-2557 deletion completed in 6.116694717s

• [SLOW TEST:6.192 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
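
The ConfigMap test above expects creation to fail when the data map contains an empty key. A sketch that reproduces the rejection with a literal manifest under a placeholder name; whether the refusal comes from client-side validation or the API server, kubectl exits non-zero either way.

package main

import (
	"log"
	"os/exec"
	"strings"
)

// A ConfigMap with "" as a data key is invalid and should be refused.
const manifest = `
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-empty-key-demo
data:
  "": "value"
`

func main() {
	cmd := exec.Command("kubectl", "--kubeconfig=/root/.kube/config", "create", "-f", "-")
	cmd.Stdin = strings.NewReader(manifest)
	out, err := cmd.CombinedOutput()
	if err == nil {
		log.Fatalf("expected the ConfigMap with an empty key to be rejected, got:\n%s", out)
	}
	log.Printf("rejected as expected: %v\n%s", err, out)
}
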
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:54:30.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5698
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 22 19:54:30.656: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Oct 22 19:54:56.798: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.225:8080/dial?request=hostName&protocol=udp&host=10.244.2.9&port=8081&tries=1'] Namespace:pod-network-test-5698 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 22 19:54:56.798: INFO: >>> kubeConfig: /root/.kube/config
I1022 19:54:56.828703       6 log.go:172] (0xc000d68790) (0xc001adebe0) Create stream
I1022 19:54:56.828729       6 log.go:172] (0xc000d68790) (0xc001adebe0) Stream added, broadcasting: 1
I1022 19:54:56.830583       6 log.go:172] (0xc000d68790) Reply frame received for 1
I1022 19:54:56.830607       6 log.go:172] (0xc000d68790) (0xc000eee640) Create stream
I1022 19:54:56.830615       6 log.go:172] (0xc000d68790) (0xc000eee640) Stream added, broadcasting: 3
I1022 19:54:56.831505       6 log.go:172] (0xc000d68790) Reply frame received for 3
I1022 19:54:56.831548       6 log.go:172] (0xc000d68790) (0xc000eee6e0) Create stream
I1022 19:54:56.831562       6 log.go:172] (0xc000d68790) (0xc000eee6e0) Stream added, broadcasting: 5
I1022 19:54:56.832660       6 log.go:172] (0xc000d68790) Reply frame received for 5
I1022 19:54:56.903259       6 log.go:172] (0xc000d68790) Data frame received for 3
I1022 19:54:56.903283       6 log.go:172] (0xc000eee640) (3) Data frame handling
I1022 19:54:56.903295       6 log.go:172] (0xc000eee640) (3) Data frame sent
I1022 19:54:56.903916       6 log.go:172] (0xc000d68790) Data frame received for 3
I1022 19:54:56.903950       6 log.go:172] (0xc000eee640) (3) Data frame handling
I1022 19:54:56.903973       6 log.go:172] (0xc000d68790) Data frame received for 5
I1022 19:54:56.903990       6 log.go:172] (0xc000eee6e0) (5) Data frame handling
I1022 19:54:56.906269       6 log.go:172] (0xc000d68790) Data frame received for 1
I1022 19:54:56.906293       6 log.go:172] (0xc001adebe0) (1) Data frame handling
I1022 19:54:56.906301       6 log.go:172] (0xc001adebe0) (1) Data frame sent
I1022 19:54:56.906311       6 log.go:172] (0xc000d68790) (0xc001adebe0) Stream removed, broadcasting: 1
I1022 19:54:56.906345       6 log.go:172] (0xc000d68790) Go away received
I1022 19:54:56.906377       6 log.go:172] (0xc000d68790) (0xc001adebe0) Stream removed, broadcasting: 1
I1022 19:54:56.906386       6 log.go:172] (0xc000d68790) (0xc000eee640) Stream removed, broadcasting: 3
I1022 19:54:56.906442       6 log.go:172] Streams opened: 1, map[spdy.StreamId]*spdystream.Stream{0x5:(*spdystream.Stream)(0xc000eee6e0)}
I1022 19:54:56.906489       6 log.go:172] (0xc000d68790) (0xc000eee6e0) Stream removed, broadcasting: 5
Oct 22 19:54:56.906: INFO: Waiting for endpoints: map[]
Oct 22 19:54:56.910: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.225:8080/dial?request=hostName&protocol=udp&host=10.244.1.224&port=8081&tries=1'] Namespace:pod-network-test-5698 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 22 19:54:56.910: INFO: >>> kubeConfig: /root/.kube/config
I1022 19:54:56.940037       6 log.go:172] (0xc000a57a20) (0xc000b685a0) Create stream
I1022 19:54:56.940062       6 log.go:172] (0xc000a57a20) (0xc000b685a0) Stream added, broadcasting: 1
I1022 19:54:56.941974       6 log.go:172] (0xc000a57a20) Reply frame received for 1
I1022 19:54:56.942026       6 log.go:172] (0xc000a57a20) (0xc002521400) Create stream
I1022 19:54:56.942049       6 log.go:172] (0xc000a57a20) (0xc002521400) Stream added, broadcasting: 3
I1022 19:54:56.943189       6 log.go:172] (0xc000a57a20) Reply frame received for 3
I1022 19:54:56.943250       6 log.go:172] (0xc000a57a20) (0xc002a7e140) Create stream
I1022 19:54:56.943274       6 log.go:172] (0xc000a57a20) (0xc002a7e140) Stream added, broadcasting: 5
I1022 19:54:56.944191       6 log.go:172] (0xc000a57a20) Reply frame received for 5
I1022 19:54:57.015763       6 log.go:172] (0xc000a57a20) Data frame received for 3
I1022 19:54:57.015798       6 log.go:172] (0xc002521400) (3) Data frame handling
I1022 19:54:57.015826       6 log.go:172] (0xc002521400) (3) Data frame sent
I1022 19:54:57.016216       6 log.go:172] (0xc000a57a20) Data frame received for 3
I1022 19:54:57.016275       6 log.go:172] (0xc002521400) (3) Data frame handling
I1022 19:54:57.016309       6 log.go:172] (0xc000a57a20) Data frame received for 5
I1022 19:54:57.016333       6 log.go:172] (0xc002a7e140) (5) Data frame handling
I1022 19:54:57.018157       6 log.go:172] (0xc000a57a20) Data frame received for 1
I1022 19:54:57.018188       6 log.go:172] (0xc000b685a0) (1) Data frame handling
I1022 19:54:57.018214       6 log.go:172] (0xc000b685a0) (1) Data frame sent
I1022 19:54:57.018319       6 log.go:172] (0xc000a57a20) (0xc000b685a0) Stream removed, broadcasting: 1
I1022 19:54:57.018452       6 log.go:172] (0xc000a57a20) (0xc000b685a0) Stream removed, broadcasting: 1
I1022 19:54:57.018479       6 log.go:172] (0xc000a57a20) (0xc002521400) Stream removed, broadcasting: 3
I1022 19:54:57.018499       6 log.go:172] (0xc000a57a20) (0xc002a7e140) Stream removed, broadcasting: 5
Oct 22 19:54:57.018: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:54:57.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I1022 19:54:57.018990       6 log.go:172] (0xc000a57a20) Go away received
STEP: Destroying namespace "pod-network-test-5698" for this suite.
Oct 22 19:55:19.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:55:19.182: INFO: namespace pod-network-test-5698 deletion completed in 22.151673606s

• [SLOW TEST:48.596 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
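
The intra-pod UDP check above runs curl inside the host test pod against the webserver pod's /dial endpoint, which relays a UDP request to the target pod and reports which hostname answered. A sketch of the same probe via kubectl exec, using the pod IPs, namespace, and container name recorded in this run; those values are specific to this cluster and would differ elsewhere.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same /dial query string the framework logs at 19:54:56.
	url := "http://10.244.1.225:8080/dial?request=hostName&protocol=udp&host=10.244.2.9&port=8081&tries=1"
	cmd := exec.Command("kubectl",
		"--kubeconfig=/root/.kube/config",
		"exec", "--namespace=pod-network-test-5698",
		"host-test-container-pod", "-c", "hostexec",
		"--", "/bin/sh", "-c", "curl -g -q -s '"+url+"'")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("probe failed: %v\n%s", err, out)
	}
	fmt.Printf("%s\n", out) // expect a small JSON body naming the pod that answered over UDP
}
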
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:55:19.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Oct 22 19:55:27.429: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 22 19:55:27.440: INFO: Pod pod-with-poststart-exec-hook still exists
Oct 22 19:55:29.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 22 19:55:29.854: INFO: Pod pod-with-poststart-exec-hook still exists
Oct 22 19:55:31.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 22 19:55:31.444: INFO: Pod pod-with-poststart-exec-hook still exists
Oct 22 19:55:33.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 22 19:55:33.458: INFO: Pod pod-with-poststart-exec-hook still exists
Oct 22 19:55:35.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 22 19:55:35.444: INFO: Pod pod-with-poststart-exec-hook still exists
Oct 22 19:55:37.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 22 19:55:37.444: INFO: Pod pod-with-poststart-exec-hook still exists
Oct 22 19:55:39.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 22 19:55:39.444: INFO: Pod pod-with-poststart-exec-hook still exists
Oct 22 19:55:41.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 22 19:55:41.444: INFO: Pod pod-with-poststart-exec-hook still exists
Oct 22 19:55:43.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 22 19:55:43.597: INFO: Pod pod-with-poststart-exec-hook still exists
Oct 22 19:55:45.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 22 19:55:45.444: INFO: Pod pod-with-poststart-exec-hook still exists
Oct 22 19:55:47.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 22 19:55:47.445: INFO: Pod pod-with-poststart-exec-hook still exists
Oct 22 19:55:49.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 22 19:55:49.445: INFO: Pod pod-with-poststart-exec-hook still exists
Oct 22 19:55:51.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 22 19:55:54.174: INFO: Pod pod-with-poststart-exec-hook still exists
Oct 22 19:55:55.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 22 19:55:55.444: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:55:55.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-766" for this suite.
Oct 22 19:56:17.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:56:17.539: INFO: namespace container-lifecycle-hook-766 deletion completed in 22.090789312s

• [SLOW TEST:58.356 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
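
The lifecycle-hook test above creates a pod whose container runs a postStart exec hook and then waits for the pod to be deleted. A hypothetical pod of that shape; here the hook merely touches a marker file, whereas the real test's hook calls back into a separate handler pod, so the command is illustrative only.

package main

import (
	"log"
	"os/exec"
	"strings"
)

// postStart runs right after the container starts; if the hook fails, the
// container is killed and restarted according to its restart policy.
const manifest = `
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: poststart-demo
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "touch /tmp/poststart-ran"]
`

func main() {
	cmd := exec.Command("kubectl", "--kubeconfig=/root/.kube/config", "create", "-f", "-")
	cmd.Stdin = strings.NewReader(manifest)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("create failed: %v\n%s", err, out)
	}
}
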
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:56:17.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-3619/configmap-test-bbad6862-277e-4786-a9b3-0e83d31c849f
STEP: Creating a pod to test consume configMaps
Oct 22 19:56:17.676: INFO: Waiting up to 5m0s for pod "pod-configmaps-a789adfe-e028-4a25-b45a-efe399da48bb" in namespace "configmap-3619" to be "success or failure"
Oct 22 19:56:17.694: INFO: Pod "pod-configmaps-a789adfe-e028-4a25-b45a-efe399da48bb": Phase="Pending", Reason="", readiness=false. Elapsed: 18.350876ms
Oct 22 19:56:19.698: INFO: Pod "pod-configmaps-a789adfe-e028-4a25-b45a-efe399da48bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021684439s
Oct 22 19:56:21.702: INFO: Pod "pod-configmaps-a789adfe-e028-4a25-b45a-efe399da48bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026087547s
STEP: Saw pod success
Oct 22 19:56:21.702: INFO: Pod "pod-configmaps-a789adfe-e028-4a25-b45a-efe399da48bb" satisfied condition "success or failure"
Oct 22 19:56:21.705: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-a789adfe-e028-4a25-b45a-efe399da48bb container env-test: 
STEP: delete the pod
Oct 22 19:56:21.724: INFO: Waiting for pod pod-configmaps-a789adfe-e028-4a25-b45a-efe399da48bb to disappear
Oct 22 19:56:21.728: INFO: Pod pod-configmaps-a789adfe-e028-4a25-b45a-efe399da48bb no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:56:21.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3619" for this suite.
Oct 22 19:56:27.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:56:27.845: INFO: namespace configmap-3619 deletion completed in 6.092760162s

• [SLOW TEST:10.307 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
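
The environment-consumption test above surfaces a ConfigMap key as an env var via configMapKeyRef and then checks the container output. A hypothetical minimal pair of objects showing that wiring; names, key, and image are placeholders rather than the generated names in the log.

package main

import (
	"log"
	"os/exec"
	"strings"
)

const manifest = `
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-env-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-env-demo
          key: data-1
`

func main() {
	cmd := exec.Command("kubectl", "--kubeconfig=/root/.kube/config", "create", "-f", "-")
	cmd.Stdin = strings.NewReader(manifest)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("create failed: %v\n%s", err, out)
	}
}
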
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:56:27.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-98c87694-abe9-4e7f-a84a-32efa95b0b8d
STEP: Creating a pod to test consume configMaps
Oct 22 19:56:27.919: INFO: Waiting up to 5m0s for pod "pod-configmaps-c051bb21-e880-4422-9996-7d19b9e2f3bf" in namespace "configmap-8831" to be "success or failure"
Oct 22 19:56:27.921: INFO: Pod "pod-configmaps-c051bb21-e880-4422-9996-7d19b9e2f3bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081097ms
Oct 22 19:56:29.925: INFO: Pod "pod-configmaps-c051bb21-e880-4422-9996-7d19b9e2f3bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005739371s
Oct 22 19:56:31.929: INFO: Pod "pod-configmaps-c051bb21-e880-4422-9996-7d19b9e2f3bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009955151s
Oct 22 19:56:33.935: INFO: Pod "pod-configmaps-c051bb21-e880-4422-9996-7d19b9e2f3bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015494343s
STEP: Saw pod success
Oct 22 19:56:33.935: INFO: Pod "pod-configmaps-c051bb21-e880-4422-9996-7d19b9e2f3bf" satisfied condition "success or failure"
Oct 22 19:56:33.938: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-c051bb21-e880-4422-9996-7d19b9e2f3bf container configmap-volume-test: 
STEP: delete the pod
Oct 22 19:56:33.982: INFO: Waiting for pod pod-configmaps-c051bb21-e880-4422-9996-7d19b9e2f3bf to disappear
Oct 22 19:56:33.989: INFO: Pod pod-configmaps-c051bb21-e880-4422-9996-7d19b9e2f3bf no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:56:33.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8831" for this suite.
Oct 22 19:56:40.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:56:40.081: INFO: namespace configmap-8831 deletion completed in 6.088442434s

• [SLOW TEST:12.235 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:56:40.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Oct 22 19:56:40.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:56:44.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-275" for this suite.
Oct 22 19:57:22.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:57:22.395: INFO: namespace pods-275 deletion completed in 38.098826842s

• [SLOW TEST:42.313 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:57:22.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-bb2df03b-93fc-4388-991c-4a2eefc2b3e3
STEP: Creating a pod to test consume configMaps
Oct 22 19:57:22.474: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-28655532-6371-4c45-bbc3-c1f4dd7f5e33" in namespace "projected-7831" to be "success or failure"
Oct 22 19:57:22.478: INFO: Pod "pod-projected-configmaps-28655532-6371-4c45-bbc3-c1f4dd7f5e33": Phase="Pending", Reason="", readiness=false. Elapsed: 4.443969ms
Oct 22 19:57:24.482: INFO: Pod "pod-projected-configmaps-28655532-6371-4c45-bbc3-c1f4dd7f5e33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008471399s
Oct 22 19:57:26.627: INFO: Pod "pod-projected-configmaps-28655532-6371-4c45-bbc3-c1f4dd7f5e33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.153328669s
STEP: Saw pod success
Oct 22 19:57:26.627: INFO: Pod "pod-projected-configmaps-28655532-6371-4c45-bbc3-c1f4dd7f5e33" satisfied condition "success or failure"
Oct 22 19:57:26.709: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-28655532-6371-4c45-bbc3-c1f4dd7f5e33 container projected-configmap-volume-test: 
STEP: delete the pod
Oct 22 19:57:26.775: INFO: Waiting for pod pod-projected-configmaps-28655532-6371-4c45-bbc3-c1f4dd7f5e33 to disappear
Oct 22 19:57:26.786: INFO: Pod pod-projected-configmaps-28655532-6371-4c45-bbc3-c1f4dd7f5e33 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:57:26.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7831" for this suite.
Oct 22 19:57:32.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:57:32.918: INFO: namespace projected-7831 deletion completed in 6.128301856s

• [SLOW TEST:10.523 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:57:32.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Oct 22 19:57:32.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Oct 22 19:57:33.119: INFO: stderr: ""
Oct 22 19:57:33.119: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T05:17:59Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:31:02Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:57:33.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-829" for this suite.
Oct 22 19:57:39.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:57:39.215: INFO: namespace kubectl-829 deletion completed in 6.090553885s

• [SLOW TEST:6.297 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
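
The spec above only runs "kubectl version" against the cluster and asserts that both the client and the server stanzas come back. A quick way to make the same check by hand, shown here as an illustrative one-liner rather than the exact e2e assertion, is to request JSON output and look for both blocks:

# Both keys should appear once the apiserver is reachable.
kubectl --kubeconfig=/root/.kube/config version -o json | grep -E '"(clientVersion|serverVersion)"'
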
SSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:57:39.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Oct 22 19:57:39.255: INFO: Waiting up to 5m0s for pod "downward-api-4ccc9862-4bed-4bdb-99a3-a335de89cfc4" in namespace "downward-api-1107" to be "success or failure"
Oct 22 19:57:39.272: INFO: Pod "downward-api-4ccc9862-4bed-4bdb-99a3-a335de89cfc4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.95449ms
Oct 22 19:57:41.277: INFO: Pod "downward-api-4ccc9862-4bed-4bdb-99a3-a335de89cfc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02195093s
Oct 22 19:57:43.281: INFO: Pod "downward-api-4ccc9862-4bed-4bdb-99a3-a335de89cfc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026268751s
STEP: Saw pod success
Oct 22 19:57:43.281: INFO: Pod "downward-api-4ccc9862-4bed-4bdb-99a3-a335de89cfc4" satisfied condition "success or failure"
Oct 22 19:57:43.284: INFO: Trying to get logs from node iruya-worker pod downward-api-4ccc9862-4bed-4bdb-99a3-a335de89cfc4 container dapi-container: 
STEP: delete the pod
Oct 22 19:57:43.315: INFO: Waiting for pod downward-api-4ccc9862-4bed-4bdb-99a3-a335de89cfc4 to disappear
Oct 22 19:57:43.325: INFO: Pod downward-api-4ccc9862-4bed-4bdb-99a3-a335de89cfc4 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:57:43.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1107" for this suite.
Oct 22 19:57:49.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:57:49.417: INFO: namespace downward-api-1107 deletion completed in 6.088342577s

• [SLOW TEST:10.201 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
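
The spec above asks the Downward API to surface the container's own CPU and memory requests and limits as environment variables, then reads them back from the pod log. The manifest below is a minimal sketch of that wiring; names, resource values, and the busybox image are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E '^(CPU|MEMORY)_'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.cpu
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF
# After the pod completes, the four variables show up in its log.
kubectl logs downward-env-demo
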
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:57:49.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Oct 22 19:57:49.503: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7212" to be "success or failure"
Oct 22 19:57:49.506: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.483161ms
Oct 22 19:57:51.510: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006912397s
Oct 22 19:57:53.515: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011167026s
Oct 22 19:57:55.519: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015187094s
STEP: Saw pod success
Oct 22 19:57:55.519: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Oct 22 19:57:55.521: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Oct 22 19:57:55.539: INFO: Waiting for pod pod-host-path-test to disappear
Oct 22 19:57:55.550: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:57:55.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-7212" for this suite.
Oct 22 19:58:01.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:58:01.656: INFO: namespace hostpath-7212 deletion completed in 6.103139955s

• [SLOW TEST:12.239 seconds]
[sig-storage] HostPath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
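
The HostPath spec above mounts a directory from the node into the pod and checks the mode it is exposed with. A hand-run sketch follows; the node path, object names, and busybox image are illustrative, and the container simply prints the permission bits instead of asserting a particular value.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox
    # Print the permission bits of the mount point.
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/hostpath-demo
      type: DirectoryOrCreate
EOF
kubectl logs hostpath-mode-demo
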
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:58:01.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Oct 22 19:58:01.720: INFO: Waiting up to 5m0s for pod "downwardapi-volume-522bdd16-4faf-4b3b-bb00-571340efbfa9" in namespace "downward-api-8983" to be "success or failure"
Oct 22 19:58:01.724: INFO: Pod "downwardapi-volume-522bdd16-4faf-4b3b-bb00-571340efbfa9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.498145ms
Oct 22 19:58:03.728: INFO: Pod "downwardapi-volume-522bdd16-4faf-4b3b-bb00-571340efbfa9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008118943s
Oct 22 19:58:05.732: INFO: Pod "downwardapi-volume-522bdd16-4faf-4b3b-bb00-571340efbfa9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012371228s
STEP: Saw pod success
Oct 22 19:58:05.732: INFO: Pod "downwardapi-volume-522bdd16-4faf-4b3b-bb00-571340efbfa9" satisfied condition "success or failure"
Oct 22 19:58:05.735: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-522bdd16-4faf-4b3b-bb00-571340efbfa9 container client-container: 
STEP: delete the pod
Oct 22 19:58:05.910: INFO: Waiting for pod downwardapi-volume-522bdd16-4faf-4b3b-bb00-571340efbfa9 to disappear
Oct 22 19:58:05.925: INFO: Pod downwardapi-volume-522bdd16-4faf-4b3b-bb00-571340efbfa9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:58:05.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8983" for this suite.
Oct 22 19:58:11.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:58:12.013: INFO: namespace downward-api-8983 deletion completed in 6.084980549s

• [SLOW TEST:10.356 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
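
Unlike the earlier env-var variant, this spec projects the container's memory limit through a downwardAPI volume and reads it back from a file. A minimal sketch, with illustrative names and the busybox image standing in for the framework's client container:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
# With the default divisor the limit is reported in bytes (67108864 for 64Mi).
kubectl logs downward-volume-demo
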
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:58:12.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-6jrw
STEP: Creating a pod to test atomic-volume-subpath
Oct 22 19:58:12.129: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-6jrw" in namespace "subpath-4468" to be "success or failure"
Oct 22 19:58:12.140: INFO: Pod "pod-subpath-test-projected-6jrw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.968632ms
Oct 22 19:58:14.144: INFO: Pod "pod-subpath-test-projected-6jrw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015059641s
Oct 22 19:58:16.149: INFO: Pod "pod-subpath-test-projected-6jrw": Phase="Running", Reason="", readiness=true. Elapsed: 4.019519656s
Oct 22 19:58:18.152: INFO: Pod "pod-subpath-test-projected-6jrw": Phase="Running", Reason="", readiness=true. Elapsed: 6.023304798s
Oct 22 19:58:20.157: INFO: Pod "pod-subpath-test-projected-6jrw": Phase="Running", Reason="", readiness=true. Elapsed: 8.027570448s
Oct 22 19:58:22.161: INFO: Pod "pod-subpath-test-projected-6jrw": Phase="Running", Reason="", readiness=true. Elapsed: 10.031630251s
Oct 22 19:58:24.165: INFO: Pod "pod-subpath-test-projected-6jrw": Phase="Running", Reason="", readiness=true. Elapsed: 12.0354572s
Oct 22 19:58:26.169: INFO: Pod "pod-subpath-test-projected-6jrw": Phase="Running", Reason="", readiness=true. Elapsed: 14.039981405s
Oct 22 19:58:28.174: INFO: Pod "pod-subpath-test-projected-6jrw": Phase="Running", Reason="", readiness=true. Elapsed: 16.04467264s
Oct 22 19:58:30.178: INFO: Pod "pod-subpath-test-projected-6jrw": Phase="Running", Reason="", readiness=true. Elapsed: 18.048815171s
Oct 22 19:58:32.182: INFO: Pod "pod-subpath-test-projected-6jrw": Phase="Running", Reason="", readiness=true. Elapsed: 20.053182432s
Oct 22 19:58:34.186: INFO: Pod "pod-subpath-test-projected-6jrw": Phase="Running", Reason="", readiness=true. Elapsed: 22.057281629s
Oct 22 19:58:36.191: INFO: Pod "pod-subpath-test-projected-6jrw": Phase="Running", Reason="", readiness=true. Elapsed: 24.061908815s
Oct 22 19:58:38.196: INFO: Pod "pod-subpath-test-projected-6jrw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.066860892s
STEP: Saw pod success
Oct 22 19:58:38.196: INFO: Pod "pod-subpath-test-projected-6jrw" satisfied condition "success or failure"
Oct 22 19:58:38.199: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-6jrw container test-container-subpath-projected-6jrw: 
STEP: delete the pod
Oct 22 19:58:38.223: INFO: Waiting for pod pod-subpath-test-projected-6jrw to disappear
Oct 22 19:58:38.241: INFO: Pod pod-subpath-test-projected-6jrw no longer exists
STEP: Deleting pod pod-subpath-test-projected-6jrw
Oct 22 19:58:38.241: INFO: Deleting pod "pod-subpath-test-projected-6jrw" in namespace "subpath-4468"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:58:38.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4468" for this suite.
Oct 22 19:58:44.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:58:44.339: INFO: namespace subpath-4468 deletion completed in 6.092644114s

• [SLOW TEST:32.326 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
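
The Subpath spec above mounts a single file out of a projected volume via subPath and verifies the container sees the expected content; the pod stays in Running for roughly 25 seconds before succeeding while the atomic writer volume is exercised. The sketch below reproduces only the mount wiring, with illustrative names and a single read. Worth noting as a design caveat: unlike a full projected mount, a subPath mount does not pick up later ConfigMap updates.

kubectl create configmap subpath-demo-cm --from-literal=configmap-file='configmap contents'
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    # Only the single projected key is visible at the subPath mount.
    command: ["sh", "-c", "cat /mnt/only-this-file"]
    volumeMounts:
    - name: projected-vol
      mountPath: /mnt/only-this-file
      subPath: configmap-file
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: subpath-demo-cm
EOF
kubectl logs subpath-projected-demo
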
SS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:58:44.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7774.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7774.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7774.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7774.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 22 19:58:52.458: INFO: DNS probes using dns-test-3e665434-afce-4bcb-9e3f-d4c00311a58d succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7774.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7774.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7774.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7774.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 22 19:59:00.782: INFO: File wheezy_udp@dns-test-service-3.dns-7774.svc.cluster.local from pod  dns-7774/dns-test-502c7742-527b-4d44-aaa4-047e98009b5f contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 22 19:59:00.785: INFO: File jessie_udp@dns-test-service-3.dns-7774.svc.cluster.local from pod  dns-7774/dns-test-502c7742-527b-4d44-aaa4-047e98009b5f contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 22 19:59:00.785: INFO: Lookups using dns-7774/dns-test-502c7742-527b-4d44-aaa4-047e98009b5f failed for: [wheezy_udp@dns-test-service-3.dns-7774.svc.cluster.local jessie_udp@dns-test-service-3.dns-7774.svc.cluster.local]

Oct 22 19:59:05.790: INFO: File wheezy_udp@dns-test-service-3.dns-7774.svc.cluster.local from pod  dns-7774/dns-test-502c7742-527b-4d44-aaa4-047e98009b5f contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 22 19:59:05.793: INFO: File jessie_udp@dns-test-service-3.dns-7774.svc.cluster.local from pod  dns-7774/dns-test-502c7742-527b-4d44-aaa4-047e98009b5f contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 22 19:59:05.793: INFO: Lookups using dns-7774/dns-test-502c7742-527b-4d44-aaa4-047e98009b5f failed for: [wheezy_udp@dns-test-service-3.dns-7774.svc.cluster.local jessie_udp@dns-test-service-3.dns-7774.svc.cluster.local]

Oct 22 19:59:10.791: INFO: File wheezy_udp@dns-test-service-3.dns-7774.svc.cluster.local from pod  dns-7774/dns-test-502c7742-527b-4d44-aaa4-047e98009b5f contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 22 19:59:10.796: INFO: File jessie_udp@dns-test-service-3.dns-7774.svc.cluster.local from pod  dns-7774/dns-test-502c7742-527b-4d44-aaa4-047e98009b5f contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 22 19:59:10.796: INFO: Lookups using dns-7774/dns-test-502c7742-527b-4d44-aaa4-047e98009b5f failed for: [wheezy_udp@dns-test-service-3.dns-7774.svc.cluster.local jessie_udp@dns-test-service-3.dns-7774.svc.cluster.local]

Oct 22 19:59:15.791: INFO: File wheezy_udp@dns-test-service-3.dns-7774.svc.cluster.local from pod  dns-7774/dns-test-502c7742-527b-4d44-aaa4-047e98009b5f contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 22 19:59:15.795: INFO: File jessie_udp@dns-test-service-3.dns-7774.svc.cluster.local from pod  dns-7774/dns-test-502c7742-527b-4d44-aaa4-047e98009b5f contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 22 19:59:15.795: INFO: Lookups using dns-7774/dns-test-502c7742-527b-4d44-aaa4-047e98009b5f failed for: [wheezy_udp@dns-test-service-3.dns-7774.svc.cluster.local jessie_udp@dns-test-service-3.dns-7774.svc.cluster.local]

Oct 22 19:59:20.795: INFO: File jessie_udp@dns-test-service-3.dns-7774.svc.cluster.local from pod  dns-7774/dns-test-502c7742-527b-4d44-aaa4-047e98009b5f contains 'foo.example.com.
' instead of 'bar.example.com.'
Oct 22 19:59:20.795: INFO: Lookups using dns-7774/dns-test-502c7742-527b-4d44-aaa4-047e98009b5f failed for: [jessie_udp@dns-test-service-3.dns-7774.svc.cluster.local]

Oct 22 19:59:25.793: INFO: DNS probes using dns-test-502c7742-527b-4d44-aaa4-047e98009b5f succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7774.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7774.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7774.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7774.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 22 19:59:34.301: INFO: DNS probes using dns-test-5b3c0691-8d1e-4679-95e7-144b03b048ab succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:59:34.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7774" for this suite.
Oct 22 19:59:40.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:59:40.497: INFO: namespace dns-7774 deletion completed in 6.118112651s

• [SLOW TEST:56.158 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
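
The DNS spec above creates an ExternalName service pointing at foo.example.com, confirms that in-cluster lookups return that CNAME, retargets it to bar.example.com (the probes briefly keep seeing the old answer while DNS catches up, hence the retries between 19:59:00 and 19:59:25), and finally flips the service to type ClusterIP so the name resolves to an A record. The commands below sketch the first two steps by hand; the service and probe names are illustrative and the current namespace is assumed to be default.

# Create the ExternalName service and resolve it from inside the cluster.
kubectl create service externalname dns-demo-svc --external-name foo.example.com
kubectl run dns-probe --image=busybox --restart=Never -- nslookup dns-demo-svc.default.svc.cluster.local
kubectl logs dns-probe   # once the probe pod completes, the answer is the CNAME foo.example.com

# Retarget the service; subsequent lookups should return the new CNAME.
kubectl patch service dns-demo-svc -p '{"spec":{"externalName":"bar.example.com"}}'
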
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:59:40.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-079862e1-97aa-4544-8cc4-53b14a35ad2d
STEP: Creating a pod to test consume configMaps
Oct 22 19:59:40.598: INFO: Waiting up to 5m0s for pod "pod-configmaps-aea575d7-1a14-4a08-9010-01087b1f5011" in namespace "configmap-2043" to be "success or failure"
Oct 22 19:59:40.600: INFO: Pod "pod-configmaps-aea575d7-1a14-4a08-9010-01087b1f5011": Phase="Pending", Reason="", readiness=false. Elapsed: 2.793969ms
Oct 22 19:59:42.819: INFO: Pod "pod-configmaps-aea575d7-1a14-4a08-9010-01087b1f5011": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221526788s
Oct 22 19:59:44.825: INFO: Pod "pod-configmaps-aea575d7-1a14-4a08-9010-01087b1f5011": Phase="Pending", Reason="", readiness=false. Elapsed: 4.227103466s
Oct 22 19:59:46.828: INFO: Pod "pod-configmaps-aea575d7-1a14-4a08-9010-01087b1f5011": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.230765151s
STEP: Saw pod success
Oct 22 19:59:46.828: INFO: Pod "pod-configmaps-aea575d7-1a14-4a08-9010-01087b1f5011" satisfied condition "success or failure"
Oct 22 19:59:46.831: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-aea575d7-1a14-4a08-9010-01087b1f5011 container configmap-volume-test: 
STEP: delete the pod
Oct 22 19:59:46.872: INFO: Waiting for pod pod-configmaps-aea575d7-1a14-4a08-9010-01087b1f5011 to disappear
Oct 22 19:59:46.884: INFO: Pod pod-configmaps-aea575d7-1a14-4a08-9010-01087b1f5011 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:59:46.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2043" for this suite.
Oct 22 19:59:52.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 19:59:52.981: INFO: namespace configmap-2043 deletion completed in 6.092639032s

• [SLOW TEST:12.483 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
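
This is the plain configMap volume counterpart of the projected example earlier: every key in the ConfigMap shows up as a file named after the key under the mount path. A minimal sketch with illustrative names:

kubectl create configmap cm-volume-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-volume-consumer
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/configmap-volume
  volumes:
  - name: cfg
    configMap:
      name: cm-volume-demo
EOF
kubectl logs cm-volume-consumer
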
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 19:59:52.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-7941fd8a-d72f-4706-b888-cc80dc8c8898
STEP: Creating a pod to test consume configMaps
Oct 22 19:59:53.060: INFO: Waiting up to 5m0s for pod "pod-configmaps-d60b4c85-9050-4ae6-84f0-2f7949a46b69" in namespace "configmap-8697" to be "success or failure"
Oct 22 19:59:53.076: INFO: Pod "pod-configmaps-d60b4c85-9050-4ae6-84f0-2f7949a46b69": Phase="Pending", Reason="", readiness=false. Elapsed: 15.960498ms
Oct 22 19:59:55.113: INFO: Pod "pod-configmaps-d60b4c85-9050-4ae6-84f0-2f7949a46b69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052419635s
Oct 22 19:59:57.117: INFO: Pod "pod-configmaps-d60b4c85-9050-4ae6-84f0-2f7949a46b69": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056898158s
Oct 22 19:59:59.121: INFO: Pod "pod-configmaps-d60b4c85-9050-4ae6-84f0-2f7949a46b69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.061048694s
STEP: Saw pod success
Oct 22 19:59:59.121: INFO: Pod "pod-configmaps-d60b4c85-9050-4ae6-84f0-2f7949a46b69" satisfied condition "success or failure"
Oct 22 19:59:59.124: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-d60b4c85-9050-4ae6-84f0-2f7949a46b69 container configmap-volume-test: 
STEP: delete the pod
Oct 22 19:59:59.156: INFO: Waiting for pod pod-configmaps-d60b4c85-9050-4ae6-84f0-2f7949a46b69 to disappear
Oct 22 19:59:59.184: INFO: Pod pod-configmaps-d60b4c85-9050-4ae6-84f0-2f7949a46b69 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 19:59:59.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8697" for this suite.
Oct 22 20:00:05.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:00:05.282: INFO: namespace configmap-8697 deletion completed in 6.095141403s

• [SLOW TEST:12.301 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
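
The defaultMode variant above differs from the previous ConfigMap test only in fixing the permission bits applied to every projected key. The sketch below sets defaultMode to 0400 and has the container print the resulting mode; names and the busybox image are illustrative.

kubectl create configmap cm-mode-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-mode-consumer
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    # -L follows the symlink the volume plugin creates for each key.
    command: ["sh", "-c", "stat -Lc '%a %n' /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/configmap-volume
  volumes:
  - name: cfg
    configMap:
      name: cm-mode-demo
      defaultMode: 0400
EOF
kubectl logs cm-mode-consumer   # should report mode 400 for the projected file
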
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:00:05.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct 22 20:00:05.358: INFO: Waiting up to 5m0s for pod "pod-286e8bf8-4f89-4f2c-b14e-fe51589e7820" in namespace "emptydir-2088" to be "success or failure"
Oct 22 20:00:05.362: INFO: Pod "pod-286e8bf8-4f89-4f2c-b14e-fe51589e7820": Phase="Pending", Reason="", readiness=false. Elapsed: 3.710249ms
Oct 22 20:00:07.366: INFO: Pod "pod-286e8bf8-4f89-4f2c-b14e-fe51589e7820": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007603644s
Oct 22 20:00:09.370: INFO: Pod "pod-286e8bf8-4f89-4f2c-b14e-fe51589e7820": Phase="Running", Reason="", readiness=true. Elapsed: 4.011888409s
Oct 22 20:00:11.374: INFO: Pod "pod-286e8bf8-4f89-4f2c-b14e-fe51589e7820": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015935601s
STEP: Saw pod success
Oct 22 20:00:11.374: INFO: Pod "pod-286e8bf8-4f89-4f2c-b14e-fe51589e7820" satisfied condition "success or failure"
Oct 22 20:00:11.377: INFO: Trying to get logs from node iruya-worker2 pod pod-286e8bf8-4f89-4f2c-b14e-fe51589e7820 container test-container: 
STEP: delete the pod
Oct 22 20:00:11.399: INFO: Waiting for pod pod-286e8bf8-4f89-4f2c-b14e-fe51589e7820 to disappear
Oct 22 20:00:11.409: INFO: Pod pod-286e8bf8-4f89-4f2c-b14e-fe51589e7820 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:00:11.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2088" for this suite.
Oct 22 20:00:17.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:00:17.511: INFO: namespace emptydir-2088 deletion completed in 6.098748903s

• [SLOW TEST:12.229 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
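
The EmptyDir spec above exercises a memory-backed (tmpfs) emptyDir as root with 0777 permissions. The sketch below mounts such a volume, shows the tmpfs mount entry, and prints the mode of a freshly created file; names and the busybox image are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "grep /test-volume /proc/mounts; touch /test-volume/f; chmod 0777 /test-volume/f; stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
EOF
kubectl logs emptydir-tmpfs-demo   # expect a tmpfs mount line followed by 777
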
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:00:17.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Oct 22 20:00:17.562: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:00:22.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6838" for this suite.
Oct 22 20:00:28.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:00:29.051: INFO: namespace init-container-6838 deletion completed in 6.087644555s

• [SLOW TEST:11.540 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
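
The InitContainer spec above verifies that when an init container fails in a pod with restartPolicy Never, it is not retried, the app containers never start, and the pod ends up Failed; the few seconds between pod creation at 20:00:17 and teardown at 20:00:22 is the test waiting for the kubelet to report exactly that. A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox
    command: ["sh", "-c", "exit 1"]
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo app ran"]
EOF
# Give the kubelet a few seconds, then inspect the outcome: phase Failed and
# the init container terminated with exit code 1, while the app container never started.
kubectl get pod init-fail-demo -o jsonpath='{.status.phase}{"\n"}'
kubectl get pod init-fail-demo -o jsonpath='{.status.initContainerStatuses[0].state.terminated.exitCode}{"\n"}'
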
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:00:29.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Oct 22 20:00:33.335: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:00:33.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9046" for this suite.
Oct 22 20:00:39.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:00:39.466: INFO: namespace container-runtime-9046 deletion completed in 6.099224361s

• [SLOW TEST:10.414 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
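
The Container Runtime spec above checks that a container running as a non-root user can write its termination message to a non-default terminationMessagePath and that the kubelet surfaces it in the container status (the log's "Expected: &{DONE} to match Container's Termination Message: DONE" line is that comparison). A minimal sketch, with an illustrative UID, path, and the busybox image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-msg-demo
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox
    securityContext:
      runAsUser: 1000
    terminationMessagePath: /dev/termination-custom-log
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
EOF
# After the container exits, the message is copied into the container status.
kubectl get pod termination-msg-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}{"\n"}'
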
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:00:39.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-61df5c3b-54ae-4351-ad6d-cac85fb8bfd3
STEP: Creating a pod to test consume configMaps
Oct 22 20:00:39.600: INFO: Waiting up to 5m0s for pod "pod-configmaps-49e5bb4d-5890-4bac-b0cc-40cfe1b887ed" in namespace "configmap-2360" to be "success or failure"
Oct 22 20:00:39.604: INFO: Pod "pod-configmaps-49e5bb4d-5890-4bac-b0cc-40cfe1b887ed": Phase="Pending", Reason="", readiness=false. Elapsed: 3.509486ms
Oct 22 20:00:41.610: INFO: Pod "pod-configmaps-49e5bb4d-5890-4bac-b0cc-40cfe1b887ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009261538s
Oct 22 20:00:43.628: INFO: Pod "pod-configmaps-49e5bb4d-5890-4bac-b0cc-40cfe1b887ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027666464s
STEP: Saw pod success
Oct 22 20:00:43.628: INFO: Pod "pod-configmaps-49e5bb4d-5890-4bac-b0cc-40cfe1b887ed" satisfied condition "success or failure"
Oct 22 20:00:43.631: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-49e5bb4d-5890-4bac-b0cc-40cfe1b887ed container configmap-volume-test: 
STEP: delete the pod
Oct 22 20:00:43.665: INFO: Waiting for pod pod-configmaps-49e5bb4d-5890-4bac-b0cc-40cfe1b887ed to disappear
Oct 22 20:00:43.674: INFO: Pod pod-configmaps-49e5bb4d-5890-4bac-b0cc-40cfe1b887ed no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:00:43.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2360" for this suite.
Oct 22 20:00:49.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:00:49.758: INFO: namespace configmap-2360 deletion completed in 6.080846454s

• [SLOW TEST:10.292 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:00:49.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Oct 22 20:00:49.783: INFO: Creating deployment "nginx-deployment"
Oct 22 20:00:49.799: INFO: Waiting for observed generation 1
Oct 22 20:00:51.825: INFO: Waiting for all required pods to come up
Oct 22 20:00:51.828: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Oct 22 20:01:06.047: INFO: Waiting for deployment "nginx-deployment" to complete
Oct 22 20:01:06.053: INFO: Updating deployment "nginx-deployment" with a non-existent image
Oct 22 20:01:06.060: INFO: Updating deployment nginx-deployment
Oct 22 20:01:06.060: INFO: Waiting for observed generation 2
Oct 22 20:01:08.454: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Oct 22 20:01:08.486: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Oct 22 20:01:08.488: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Oct 22 20:01:08.601: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Oct 22 20:01:08.601: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Oct 22 20:01:08.603: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Oct 22 20:01:08.607: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Oct 22 20:01:08.607: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Oct 22 20:01:08.611: INFO: Updating deployment nginx-deployment
Oct 22 20:01:08.611: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Oct 22 20:01:08.641: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Oct 22 20:01:09.069: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Oct 22 20:01:09.518: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-7959,SelfLink:/apis/apps/v1/namespaces/deployment-7959/deployments/nginx-deployment,UID:e5328592-56d4-4237-97e4-4d8f5ce91a91,ResourceVersion:5319998,Generation:3,CreationTimestamp:2020-10-22 20:00:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-10-22 20:01:08 +0000 UTC 2020-10-22 20:00:49 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-10-22 20:01:08 +0000 UTC 2020-10-22 20:01:08 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Oct 22 20:01:09.772: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-7959,SelfLink:/apis/apps/v1/namespaces/deployment-7959/replicasets/nginx-deployment-55fb7cb77f,UID:63785abf-9d87-46ca-af6b-2489e7c037f4,ResourceVersion:5320017,Generation:3,CreationTimestamp:2020-10-22 20:01:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment e5328592-56d4-4237-97e4-4d8f5ce91a91 0xc00320f1f7 0xc00320f1f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Oct 22 20:01:09.772: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Oct 22 20:01:09.773: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-7959,SelfLink:/apis/apps/v1/namespaces/deployment-7959/replicasets/nginx-deployment-7b8c6f4498,UID:237b6fbf-0717-40f1-8188-6417310cb97f,ResourceVersion:5320012,Generation:3,CreationTimestamp:2020-10-22 20:00:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment e5328592-56d4-4237-97e4-4d8f5ce91a91 0xc00320f2c7 0xc00320f2c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Oct 22 20:01:09.845: INFO: Pod "nginx-deployment-55fb7cb77f-52rt4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-52rt4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-55fb7cb77f-52rt4,UID:9e6945f6-6084-43e7-8c7e-279c6c136496,ResourceVersion:5319949,Generation:0,CreationTimestamp:2020-10-22 20:01:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 63785abf-9d87-46ca-af6b-2489e7c037f4 0xc00357acc7 0xc00357acc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00357ad40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00357ad60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:06 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-10-22 20:01:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.845: INFO: Pod "nginx-deployment-55fb7cb77f-68wzg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-68wzg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-55fb7cb77f-68wzg,UID:fee7efc5-55a7-46de-9a33-deba1ffcb15d,ResourceVersion:5319991,Generation:0,CreationTimestamp:2020-10-22 20:01:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 63785abf-9d87-46ca-af6b-2489e7c037f4 0xc00357ae30 0xc00357ae31}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00357aeb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00357aed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.845: INFO: Pod "nginx-deployment-55fb7cb77f-6j65l" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6j65l,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-55fb7cb77f-6j65l,UID:2fe55b62-20cb-4be3-b583-7d4ad997e8dd,ResourceVersion:5320011,Generation:0,CreationTimestamp:2020-10-22 20:01:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 63785abf-9d87-46ca-af6b-2489e7c037f4 0xc00357af57 0xc00357af58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00357afd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00357aff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.845: INFO: Pod "nginx-deployment-55fb7cb77f-bt8l6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bt8l6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-55fb7cb77f-bt8l6,UID:19cb51c1-854e-4c75-83ce-9760e79dde27,ResourceVersion:5320003,Generation:0,CreationTimestamp:2020-10-22 20:01:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 63785abf-9d87-46ca-af6b-2489e7c037f4 0xc00357b077 0xc00357b078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00357b0f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00357b110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.845: INFO: Pod "nginx-deployment-55fb7cb77f-cf2jj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cf2jj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-55fb7cb77f-cf2jj,UID:bf8e7fa1-709b-4d46-a896-32f82b845358,ResourceVersion:5319957,Generation:0,CreationTimestamp:2020-10-22 20:01:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 63785abf-9d87-46ca-af6b-2489e7c037f4 0xc00357b197 0xc00357b198}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00357b210} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00357b230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:06 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-10-22 20:01:08 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.845: INFO: Pod "nginx-deployment-55fb7cb77f-dtd8l" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dtd8l,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-55fb7cb77f-dtd8l,UID:d2ee69ca-30fa-4129-b16b-420c387ebc17,ResourceVersion:5319924,Generation:0,CreationTimestamp:2020-10-22 20:01:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 63785abf-9d87-46ca-af6b-2489e7c037f4 0xc00357b300 0xc00357b301}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00357b380} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00357b3a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:06 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-10-22 20:01:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.845: INFO: Pod "nginx-deployment-55fb7cb77f-hj7g5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hj7g5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-55fb7cb77f-hj7g5,UID:dd43c149-ad8b-4621-bd8e-7b4e5570b1af,ResourceVersion:5319928,Generation:0,CreationTimestamp:2020-10-22 20:01:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 63785abf-9d87-46ca-af6b-2489e7c037f4 0xc00357b470 0xc00357b471}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00357b4f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00357b510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:06 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-10-22 20:01:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.845: INFO: Pod "nginx-deployment-55fb7cb77f-hps7l" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hps7l,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-55fb7cb77f-hps7l,UID:77400590-4484-493b-86f5-1342cd94c9aa,ResourceVersion:5319999,Generation:0,CreationTimestamp:2020-10-22 20:01:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 63785abf-9d87-46ca-af6b-2489e7c037f4 0xc00357b5e0 0xc00357b5e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00357b660} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00357b680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.845: INFO: Pod "nginx-deployment-55fb7cb77f-j5vtw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-j5vtw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-55fb7cb77f-j5vtw,UID:85614fb7-b483-4872-a070-69883b667ba6,ResourceVersion:5319986,Generation:0,CreationTimestamp:2020-10-22 20:01:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 63785abf-9d87-46ca-af6b-2489e7c037f4 0xc00357b707 0xc00357b708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00357b780} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00357b7a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.846: INFO: Pod "nginx-deployment-55fb7cb77f-q2nq7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-q2nq7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-55fb7cb77f-q2nq7,UID:ec91c6dd-d13b-49bf-9892-72f724fb8f73,ResourceVersion:5320006,Generation:0,CreationTimestamp:2020-10-22 20:01:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 63785abf-9d87-46ca-af6b-2489e7c037f4 0xc00357b827 0xc00357b828}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00357b8a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00357b8c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.846: INFO: Pod "nginx-deployment-55fb7cb77f-rmwd8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rmwd8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-55fb7cb77f-rmwd8,UID:ed47463d-a6f6-4a3f-a893-2c58101badba,ResourceVersion:5319979,Generation:0,CreationTimestamp:2020-10-22 20:01:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 63785abf-9d87-46ca-af6b-2489e7c037f4 0xc00357b947 0xc00357b948}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00357b9c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00357b9e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.846: INFO: Pod "nginx-deployment-55fb7cb77f-rq8vc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rq8vc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-55fb7cb77f-rq8vc,UID:cc9f30ec-ab83-4eee-a584-c1cb55bd4ac6,ResourceVersion:5319940,Generation:0,CreationTimestamp:2020-10-22 20:01:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 63785abf-9d87-46ca-af6b-2489e7c037f4 0xc00357ba67 0xc00357ba68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00357bae0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00357bb00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:06 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-10-22 20:01:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.846: INFO: Pod "nginx-deployment-55fb7cb77f-x2nnf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-x2nnf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-55fb7cb77f-x2nnf,UID:144fcbd3-6d19-430e-ad82-965878b95f79,ResourceVersion:5320000,Generation:0,CreationTimestamp:2020-10-22 20:01:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 63785abf-9d87-46ca-af6b-2489e7c037f4 0xc00357bbd0 0xc00357bbd1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00357bc50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00357bc70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.846: INFO: Pod "nginx-deployment-7b8c6f4498-2g6qc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2g6qc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-7b8c6f4498-2g6qc,UID:95c4f666-56e8-411f-ab28-7ebcea3b813d,ResourceVersion:5320013,Generation:0,CreationTimestamp:2020-10-22 20:01:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 237b6fbf-0717-40f1-8188-6417310cb97f 0xc00357bcf7 0xc00357bcf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00357bd70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00357bd90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:08 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-10-22 20:01:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.846: INFO: Pod "nginx-deployment-7b8c6f4498-524vv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-524vv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-7b8c6f4498-524vv,UID:9b57ee74-f6f0-41a8-811d-1181e40795e4,ResourceVersion:5319895,Generation:0,CreationTimestamp:2020-10-22 20:00:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 237b6fbf-0717-40f1-8188-6417310cb97f 0xc00357be57 0xc00357be58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00357bed0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00357bef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:00:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:00:50 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.238,StartTime:2020-10-22 20:00:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-10-22 20:01:04 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://faf5690f65f27a8e62f361c71e753f0a43eb6a281d7826551e401ae437cecff2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.846: INFO: Pod "nginx-deployment-7b8c6f4498-5522n" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5522n,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-7b8c6f4498-5522n,UID:a9461a59-11ac-4e7b-b8d0-d3d96f731206,ResourceVersion:5320008,Generation:0,CreationTimestamp:2020-10-22 20:01:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 237b6fbf-0717-40f1-8188-6417310cb97f 0xc00357bfc7 0xc00357bfc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00351c040} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00351c060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.847: INFO: Pod "nginx-deployment-7b8c6f4498-7ffqc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7ffqc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-7b8c6f4498-7ffqc,UID:d1221143-12a1-408b-9f22-5abd4ce1d56f,ResourceVersion:5319834,Generation:0,CreationTimestamp:2020-10-22 20:00:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 237b6fbf-0717-40f1-8188-6417310cb97f 0xc00351c0e7 0xc00351c0e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00351c160} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00351c180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:00:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:00:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:00:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:00:49 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.21,StartTime:2020-10-22 20:00:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-10-22 20:00:53 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b66e318cfba7a4da650bc38ea43d2eb163beb5f3497c6e188c97729daa81d222}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.847: INFO: Pod "nginx-deployment-7b8c6f4498-866kl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-866kl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-7b8c6f4498-866kl,UID:9caa67b9-fd0f-466f-a764-e1fe3fcff2c8,ResourceVersion:5319866,Generation:0,CreationTimestamp:2020-10-22 20:00:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 237b6fbf-0717-40f1-8188-6417310cb97f 0xc00351c257 0xc00351c258}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00351c2d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00351c2f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:00:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:00:59 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:00:59 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:00:49 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.235,StartTime:2020-10-22 20:00:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-10-22 20:00:57 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://87bfcdc8816b0fe92dd150b4c45cf33b02c9899197dc8f96a0d081196f685a5e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.847: INFO: Pod "nginx-deployment-7b8c6f4498-87g9h" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-87g9h,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-7b8c6f4498-87g9h,UID:e3bbb3fc-cc8a-46a0-a2ba-fd0785f3fedd,ResourceVersion:5319892,Generation:0,CreationTimestamp:2020-10-22 20:00:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 237b6fbf-0717-40f1-8188-6417310cb97f 0xc00351c3c7 0xc00351c3c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00351c440} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00351c460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:00:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:00:49 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.23,StartTime:2020-10-22 20:00:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-10-22 20:01:03 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c371eabb101c7f9f73f6ac5639dd0f093c56b3206b5ed7b20ead81fcd600a628}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.847: INFO: Pod "nginx-deployment-7b8c6f4498-9c4nq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9c4nq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-7b8c6f4498-9c4nq,UID:7d643074-107a-4ad4-a8f1-6088caef954a,ResourceVersion:5320005,Generation:0,CreationTimestamp:2020-10-22 20:01:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 237b6fbf-0717-40f1-8188-6417310cb97f 0xc00351c537 0xc00351c538}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00351c5b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00351c5d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.847: INFO: Pod "nginx-deployment-7b8c6f4498-fwpxr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fwpxr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-7b8c6f4498-fwpxr,UID:7ddcf5c8-fd4d-4b8c-a19f-395d2b3a706e,ResourceVersion:5319872,Generation:0,CreationTimestamp:2020-10-22 20:00:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 237b6fbf-0717-40f1-8188-6417310cb97f 0xc00351c657 0xc00351c658}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00351c6d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00351c6f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:00:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:00:49 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.236,StartTime:2020-10-22 20:00:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-10-22 20:01:02 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3c02ade7ac143e959e1330c2cbde070d61ea7f6526cabae973ec8fd2c9916c30}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.847: INFO: Pod "nginx-deployment-7b8c6f4498-h4br9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h4br9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-7b8c6f4498-h4br9,UID:39121ad3-3c0e-4af5-95c3-2218f79becf4,ResourceVersion:5320018,Generation:0,CreationTimestamp:2020-10-22 20:01:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 237b6fbf-0717-40f1-8188-6417310cb97f 0xc00351c7d7 0xc00351c7d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00351c850} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00351c870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-10-22 20:01:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.847: INFO: Pod "nginx-deployment-7b8c6f4498-jhk2b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jhk2b,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-7b8c6f4498-jhk2b,UID:5d80f091-f71e-4bbd-94dc-0c8909fd185d,ResourceVersion:5320024,Generation:0,CreationTimestamp:2020-10-22 20:01:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 237b6fbf-0717-40f1-8188-6417310cb97f 0xc00351c937 0xc00351c938}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00351c9b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00351c9d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-10-22 20:01:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.847: INFO: Pod "nginx-deployment-7b8c6f4498-jw92v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jw92v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-7b8c6f4498-jw92v,UID:b55c39a7-daea-493b-ae8d-07b745ea4fb6,ResourceVersion:5320009,Generation:0,CreationTimestamp:2020-10-22 20:01:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 237b6fbf-0717-40f1-8188-6417310cb97f 0xc00351ca97 0xc00351ca98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00351cb10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00351cb30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.847: INFO: Pod "nginx-deployment-7b8c6f4498-lqh4c" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lqh4c,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-7b8c6f4498-lqh4c,UID:fd7b9a50-0742-4e15-9cf5-b8784dbcb91e,ResourceVersion:5319842,Generation:0,CreationTimestamp:2020-10-22 20:00:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 237b6fbf-0717-40f1-8188-6417310cb97f 0xc00351cbb7 0xc00351cbb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00351cc30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00351cc50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:00:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:00:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:00:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:00:49 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.234,StartTime:2020-10-22 20:00:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-10-22 20:00:54 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ef3b3c3b229692ac08bcdc433f5cf4068668dc9625bbf31141d48c52d048eb07}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.848: INFO: Pod "nginx-deployment-7b8c6f4498-mp4cz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mp4cz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-7b8c6f4498-mp4cz,UID:7eec0056-8a59-46f3-b120-287b219c2d7a,ResourceVersion:5320004,Generation:0,CreationTimestamp:2020-10-22 20:01:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 237b6fbf-0717-40f1-8188-6417310cb97f 0xc00351cd27 0xc00351cd28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00351cda0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00351cdc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.848: INFO: Pod "nginx-deployment-7b8c6f4498-q2bw5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q2bw5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-7b8c6f4498-q2bw5,UID:d5006a23-b2ef-4828-b27e-fa652a9f766a,ResourceVersion:5319975,Generation:0,CreationTimestamp:2020-10-22 20:01:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 237b6fbf-0717-40f1-8188-6417310cb97f 0xc00351ce47 0xc00351ce48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00351cec0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00351cee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.848: INFO: Pod "nginx-deployment-7b8c6f4498-r9slp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r9slp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-7b8c6f4498-r9slp,UID:3d5607d7-8ffa-4359-96a7-9589a3ba9478,ResourceVersion:5320007,Generation:0,CreationTimestamp:2020-10-22 20:01:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 237b6fbf-0717-40f1-8188-6417310cb97f 0xc00351cf67 0xc00351cf68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00351cfe0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00351d000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.848: INFO: Pod "nginx-deployment-7b8c6f4498-rbq7v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rbq7v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-7b8c6f4498-rbq7v,UID:7ae2924b-b91c-4cef-a563-eb93c0a073a7,ResourceVersion:5319985,Generation:0,CreationTimestamp:2020-10-22 20:01:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 237b6fbf-0717-40f1-8188-6417310cb97f 0xc00351d087 0xc00351d088}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00351d100} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00351d120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.848: INFO: Pod "nginx-deployment-7b8c6f4498-s8vqg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-s8vqg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-7b8c6f4498-s8vqg,UID:d28149da-d029-4ec1-9a6c-2562e2da2af6,ResourceVersion:5319990,Generation:0,CreationTimestamp:2020-10-22 20:01:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 237b6fbf-0717-40f1-8188-6417310cb97f 0xc00351d1a7 0xc00351d1a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00351d220} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00351d240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.848: INFO: Pod "nginx-deployment-7b8c6f4498-sml9v" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sml9v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-7b8c6f4498-sml9v,UID:cf323139-34df-44a8-bdc0-369852fda95e,ResourceVersion:5319855,Generation:0,CreationTimestamp:2020-10-22 20:00:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 237b6fbf-0717-40f1-8188-6417310cb97f 0xc00351d2c7 0xc00351d2c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00351d340} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00351d360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:00:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:00:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:00:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:00:49 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.22,StartTime:2020-10-22 20:00:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-10-22 20:00:57 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b382be8c984ebf3cc4a96d3c1e42023143561b51b918f2e503867197cc42df9a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.848: INFO: Pod "nginx-deployment-7b8c6f4498-tt2pq" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tt2pq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-7b8c6f4498-tt2pq,UID:475be967-04c3-4df5-a1ff-bc8da572eb7b,ResourceVersion:5319888,Generation:0,CreationTimestamp:2020-10-22 20:00:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 237b6fbf-0717-40f1-8188-6417310cb97f 0xc00351d437 0xc00351d438}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00351d4b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00351d4d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:00:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:00:50 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.25,StartTime:2020-10-22 20:00:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-10-22 20:01:04 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e35a115fb8c628b0dc6e0cd0b9bffacff1d31a9ae62c537f6ee31a710c46515b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Oct 22 20:01:09.848: INFO: Pod "nginx-deployment-7b8c6f4498-x9pj9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-x9pj9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7959,SelfLink:/api/v1/namespaces/deployment-7959/pods/nginx-deployment-7b8c6f4498-x9pj9,UID:866bcf55-9fec-4149-8ad1-658bbf4ebdd3,ResourceVersion:5319989,Generation:0,CreationTimestamp:2020-10-22 20:01:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 237b6fbf-0717-40f1-8188-6417310cb97f 0xc00351d5a7 0xc00351d5a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jtvlq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jtvlq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jtvlq true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00351d620} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00351d640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:01:09 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:01:09.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7959" for this suite.
Oct 22 20:01:28.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:01:28.136: INFO: namespace deployment-7959 deletion completed in 18.237054151s

• [SLOW TEST:38.377 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
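The test above verifies Deployment proportional scaling: when a RollingUpdate Deployment is scaled while a rollout is still in progress, the controller spreads the additional replicas across the existing active ReplicaSets in proportion to their current sizes. A minimal by-hand sketch of the same scenario, outside the e2e framework (the "demo" namespace and the image tags other than 1.14-alpine are illustrative, not taken from this run):

kubectl create namespace demo
kubectl -n demo create deployment nginx-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl -n demo scale deployment nginx-deployment --replicas=10
# Start a rolling update, then scale again before it finishes; the new replicas
# are split proportionally between the old and new ReplicaSets.
kubectl -n demo set image deployment/nginx-deployment '*=docker.io/library/nginx:1.16-alpine'
kubectl -n demo scale deployment nginx-deployment --replicas=20
kubectl -n demo get replicasets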
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:01:28.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Oct 22 20:01:28.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Oct 22 20:01:30.042: INFO: stderr: ""
Oct 22 20:01:30.042: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:01:30.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2846" for this suite.
Oct 22 20:01:36.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:01:36.190: INFO: namespace kubectl-2846 deletion completed in 6.143267944s

• [SLOW TEST:8.054 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
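The check above only asserts that the line "v1" appears in the output of kubectl api-versions (the full list is in the stdout line logged for this run). The same check can be reproduced by hand; the kubeconfig path below is the one the run itself logs:

kubectl --kubeconfig=/root/.kube/config api-versions | grep -x v1

grep -x matches the whole line, so a group version such as admissionregistration.k8s.io/v1beta1 does not count as a hit.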
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:01:36.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Oct 22 20:01:36.464: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a75b33cc-1b73-404d-aa9e-439b9e45a6bb" in namespace "downward-api-3095" to be "success or failure"
Oct 22 20:01:36.497: INFO: Pod "downwardapi-volume-a75b33cc-1b73-404d-aa9e-439b9e45a6bb": Phase="Pending", Reason="", readiness=false. Elapsed: 33.116817ms
Oct 22 20:01:38.501: INFO: Pod "downwardapi-volume-a75b33cc-1b73-404d-aa9e-439b9e45a6bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036684421s
Oct 22 20:01:40.505: INFO: Pod "downwardapi-volume-a75b33cc-1b73-404d-aa9e-439b9e45a6bb": Phase="Running", Reason="", readiness=true. Elapsed: 4.040915065s
Oct 22 20:01:42.509: INFO: Pod "downwardapi-volume-a75b33cc-1b73-404d-aa9e-439b9e45a6bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045041035s
STEP: Saw pod success
Oct 22 20:01:42.509: INFO: Pod "downwardapi-volume-a75b33cc-1b73-404d-aa9e-439b9e45a6bb" satisfied condition "success or failure"
Oct 22 20:01:42.512: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a75b33cc-1b73-404d-aa9e-439b9e45a6bb container client-container: 
STEP: delete the pod
Oct 22 20:01:42.552: INFO: Waiting for pod downwardapi-volume-a75b33cc-1b73-404d-aa9e-439b9e45a6bb to disappear
Oct 22 20:01:42.562: INFO: Pod downwardapi-volume-a75b33cc-1b73-404d-aa9e-439b9e45a6bb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:01:42.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3095" for this suite.
Oct 22 20:01:48.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:01:48.654: INFO: namespace downward-api-3095 deletion completed in 6.087663575s

• [SLOW TEST:12.464 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
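The test above mounts a downwardAPI volume that exposes the container's CPU request as a file and then reads it back from the pod logs. A sketch of an equivalent pod (the pod name, the 250m request and the "demo" namespace are illustrative; the resourceFieldRef divisor controls the unit the value is reported in):

kubectl -n demo apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m
EOF
kubectl -n demo logs downwardapi-cpu-demo   # prints 250 (the request expressed in millicores)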
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:01:48.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-2c07a7ff-0d0e-4adf-ac60-3e4253add548
STEP: Creating a pod to test consume configMaps
Oct 22 20:01:48.702: INFO: Waiting up to 5m0s for pod "pod-configmaps-6baa47cb-3560-4ae8-ae06-08a703eef7a0" in namespace "configmap-6585" to be "success or failure"
Oct 22 20:01:48.730: INFO: Pod "pod-configmaps-6baa47cb-3560-4ae8-ae06-08a703eef7a0": Phase="Pending", Reason="", readiness=false. Elapsed: 27.27274ms
Oct 22 20:01:50.934: INFO: Pod "pod-configmaps-6baa47cb-3560-4ae8-ae06-08a703eef7a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231404588s
Oct 22 20:01:52.938: INFO: Pod "pod-configmaps-6baa47cb-3560-4ae8-ae06-08a703eef7a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.235783216s
STEP: Saw pod success
Oct 22 20:01:52.938: INFO: Pod "pod-configmaps-6baa47cb-3560-4ae8-ae06-08a703eef7a0" satisfied condition "success or failure"
Oct 22 20:01:52.941: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-6baa47cb-3560-4ae8-ae06-08a703eef7a0 container configmap-volume-test: 
STEP: delete the pod
Oct 22 20:01:52.970: INFO: Waiting for pod pod-configmaps-6baa47cb-3560-4ae8-ae06-08a703eef7a0 to disappear
Oct 22 20:01:52.982: INFO: Pod pod-configmaps-6baa47cb-3560-4ae8-ae06-08a703eef7a0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:01:52.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6585" for this suite.
Oct 22 20:01:58.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:01:59.073: INFO: namespace configmap-6585 deletion completed in 6.089229886s

• [SLOW TEST:10.419 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
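The test above mounts one ConfigMap into the same pod twice, as two separate volumes, and reads the same key from both mount points. A hand-rolled sketch (the "demo" namespace, the ConfigMap name and the key are illustrative):

kubectl -n demo create configmap demo-cm --from-literal=data-1=value-1
kubectl -n demo apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-two-volumes
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/cm-one/data-1 /etc/cm-two/data-1"]
    volumeMounts:
    - name: cm-one
      mountPath: /etc/cm-one
    - name: cm-two
      mountPath: /etc/cm-two
  volumes:
  - name: cm-one
    configMap:
      name: demo-cm
  - name: cm-two
    configMap:
      name: demo-cm
EOF
kubectl -n demo logs configmap-two-volumes   # prints the value twice, once per mount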
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:01:59.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Oct 22 20:01:59.104: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:02:00.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4293" for this suite.
Oct 22 20:02:06.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:02:06.300: INFO: namespace custom-resource-definition-4293 deletion completed in 6.085431049s

• [SLOW TEST:7.226 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
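The test above only checks that a simple CustomResourceDefinition can be created and deleted through the API. On this 1.15 cluster the CRD API is still apiextensions.k8s.io/v1beta1 (see the api-versions output earlier in this log); a sketch with an illustrative group and kind:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
EOF
kubectl get crd foos.example.com
kubectl delete crd foos.example.com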
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:02:06.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Oct 22 20:02:06.384: INFO: Waiting up to 5m0s for pod "pod-67e01ca6-365b-462b-8741-79fa94b9b5df" in namespace "emptydir-6391" to be "success or failure"
Oct 22 20:02:06.391: INFO: Pod "pod-67e01ca6-365b-462b-8741-79fa94b9b5df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.398528ms
Oct 22 20:02:08.455: INFO: Pod "pod-67e01ca6-365b-462b-8741-79fa94b9b5df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070553508s
Oct 22 20:02:10.459: INFO: Pod "pod-67e01ca6-365b-462b-8741-79fa94b9b5df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074537371s
STEP: Saw pod success
Oct 22 20:02:10.459: INFO: Pod "pod-67e01ca6-365b-462b-8741-79fa94b9b5df" satisfied condition "success or failure"
Oct 22 20:02:10.461: INFO: Trying to get logs from node iruya-worker pod pod-67e01ca6-365b-462b-8741-79fa94b9b5df container test-container: 
STEP: delete the pod
Oct 22 20:02:10.482: INFO: Waiting for pod pod-67e01ca6-365b-462b-8741-79fa94b9b5df to disappear
Oct 22 20:02:10.487: INFO: Pod pod-67e01ca6-365b-462b-8741-79fa94b9b5df no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:02:10.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6391" for this suite.
Oct 22 20:02:16.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:02:16.617: INFO: namespace emptydir-6391 deletion completed in 6.126632916s

• [SLOW TEST:10.317 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
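The variant above runs as a non-root user, backs the emptyDir with tmpfs (medium: Memory), and checks a file created inside it with mode 0666. A rough sketch of the same setup (the user ID, paths and "demo" namespace are illustrative, not the framework's own pod spec):

kubectl -n demo apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001          # non-root, as in the (non-root,0666,tmpfs) case
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /mnt/test/f && chmod 0666 /mnt/test/f && ls -l /mnt/test/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory         # tmpfs-backed emptyDir
EOF
kubectl -n demo logs emptydir-tmpfs-demo   # expect -rw-rw-rw- owned by UID 1001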
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:02:16.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Oct 22 20:02:16.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9626 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Oct 22 20:02:23.897: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI1022 20:02:23.809656    3135 log.go:172] (0xc000d8e420) (0xc000710280) Create stream\nI1022 20:02:23.809686    3135 log.go:172] (0xc000d8e420) (0xc000710280) Stream added, broadcasting: 1\nI1022 20:02:23.811436    3135 log.go:172] (0xc000d8e420) Reply frame received for 1\nI1022 20:02:23.811483    3135 log.go:172] (0xc000d8e420) (0xc000690c80) Create stream\nI1022 20:02:23.811503    3135 log.go:172] (0xc000d8e420) (0xc000690c80) Stream added, broadcasting: 3\nI1022 20:02:23.812746    3135 log.go:172] (0xc000d8e420) Reply frame received for 3\nI1022 20:02:23.812814    3135 log.go:172] (0xc000d8e420) (0xc0008d06e0) Create stream\nI1022 20:02:23.812908    3135 log.go:172] (0xc000d8e420) (0xc0008d06e0) Stream added, broadcasting: 5\nI1022 20:02:23.813994    3135 log.go:172] (0xc000d8e420) Reply frame received for 5\nI1022 20:02:23.814031    3135 log.go:172] (0xc000d8e420) (0xc000710320) Create stream\nI1022 20:02:23.814041    3135 log.go:172] (0xc000d8e420) (0xc000710320) Stream added, broadcasting: 7\nI1022 20:02:23.815799    3135 log.go:172] (0xc000d8e420) Reply frame received for 7\nI1022 20:02:23.815905    3135 log.go:172] (0xc000690c80) (3) Writing data frame\nI1022 20:02:23.816018    3135 log.go:172] (0xc000690c80) (3) Writing data frame\nI1022 20:02:23.817165    3135 log.go:172] (0xc000d8e420) Data frame received for 5\nI1022 20:02:23.817187    3135 log.go:172] (0xc0008d06e0) (5) Data frame handling\nI1022 20:02:23.817199    3135 log.go:172] (0xc0008d06e0) (5) Data frame sent\nI1022 20:02:23.817556    3135 log.go:172] (0xc000d8e420) Data frame received for 5\nI1022 20:02:23.817576    3135 log.go:172] (0xc0008d06e0) (5) Data frame handling\nI1022 20:02:23.817587    3135 log.go:172] (0xc0008d06e0) (5) Data frame sent\nI1022 20:02:23.862110    3135 log.go:172] (0xc000d8e420) Data frame received for 7\nI1022 20:02:23.862149    3135 log.go:172] (0xc000710320) (7) Data frame handling\nI1022 20:02:23.862180    3135 log.go:172] (0xc000d8e420) Data frame received for 5\nI1022 20:02:23.862211    3135 log.go:172] (0xc0008d06e0) (5) Data frame handling\nI1022 20:02:23.862446    3135 log.go:172] (0xc000d8e420) Data frame received for 1\nI1022 20:02:23.862466    3135 log.go:172] (0xc000710280) (1) Data frame handling\nI1022 20:02:23.862481    3135 log.go:172] (0xc000710280) (1) Data frame sent\nI1022 20:02:23.862496    3135 log.go:172] (0xc000d8e420) (0xc000710280) Stream removed, broadcasting: 1\nI1022 20:02:23.862600    3135 log.go:172] (0xc000d8e420) (0xc000710280) Stream removed, broadcasting: 1\nI1022 20:02:23.862641    3135 log.go:172] (0xc000d8e420) (0xc000690c80) Stream removed, broadcasting: 3\nI1022 20:02:23.862696    3135 log.go:172] (0xc000d8e420) Go away received\nI1022 20:02:23.862739    3135 log.go:172] (0xc000d8e420) (0xc0008d06e0) Stream removed, broadcasting: 5\nI1022 20:02:23.862770    3135 log.go:172] (0xc000d8e420) (0xc000710320) Stream removed, broadcasting: 7\n"
Oct 22 20:02:23.897: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:02:25.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9626" for this suite.
Oct 22 20:02:31.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:02:32.014: INFO: namespace kubectl-9626 deletion completed in 6.108802258s

• [SLOW TEST:15.396 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
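
The spec just completed drives kubectl run with the deprecated job/v1 generator: it creates a Job from the busybox image, feeds it data on the attached stdin, and then verifies the Job is gone once the command returns. A rough, hedged equivalent of that flow (the payload and command are illustrative; kubectl 1.18+ dropped --generator, and kubectl create job is the usual replacement there):

printf 'abcd1234' | kubectl run e2e-test-rm-busybox-job \
  --image=busybox --generator=job/v1 --restart=OnFailure \
  --rm --stdin --attach=true \
  -- sh -c 'cat && echo "stdin closed"'

# After the command exits the Job should already have been removed:
kubectl get job e2e-test-rm-busybox-job    # expected: Error from server (NotFound)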
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:02:32.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Oct 22 20:02:32.237: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb7d2410-5016-4ca4-8065-7faaf5deab90" in namespace "projected-8767" to be "success or failure"
Oct 22 20:02:32.253: INFO: Pod "downwardapi-volume-bb7d2410-5016-4ca4-8065-7faaf5deab90": Phase="Pending", Reason="", readiness=false. Elapsed: 15.17588ms
Oct 22 20:02:34.256: INFO: Pod "downwardapi-volume-bb7d2410-5016-4ca4-8065-7faaf5deab90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018945046s
Oct 22 20:02:36.260: INFO: Pod "downwardapi-volume-bb7d2410-5016-4ca4-8065-7faaf5deab90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022175819s
STEP: Saw pod success
Oct 22 20:02:36.260: INFO: Pod "downwardapi-volume-bb7d2410-5016-4ca4-8065-7faaf5deab90" satisfied condition "success or failure"
Oct 22 20:02:36.262: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-bb7d2410-5016-4ca4-8065-7faaf5deab90 container client-container: 
STEP: delete the pod
Oct 22 20:02:36.463: INFO: Waiting for pod downwardapi-volume-bb7d2410-5016-4ca4-8065-7faaf5deab90 to disappear
Oct 22 20:02:36.493: INFO: Pod downwardapi-volume-bb7d2410-5016-4ca4-8065-7faaf5deab90 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:02:36.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8767" for this suite.
Oct 22 20:02:42.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:02:42.592: INFO: namespace projected-8767 deletion completed in 6.095830784s

• [SLOW TEST:10.578 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
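
The test above mounts a projected downward API volume and only checks that the pod's own name shows up in the mounted file. A minimal sketch of that kind of pod (names, paths and the busybox image are assumptions, not the suite's exact manifest):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
# once the pod has completed:
kubectl logs downwardapi-podname-demo    # should print: downwardapi-podname-demo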
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:02:42.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Oct 22 20:02:47.232: INFO: Successfully updated pod "pod-update-activedeadlineseconds-d0c62bbd-b3cc-4239-b026-9b078562e961"
Oct 22 20:02:47.232: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-d0c62bbd-b3cc-4239-b026-9b078562e961" in namespace "pods-3291" to be "terminated due to deadline exceeded"
Oct 22 20:02:47.242: INFO: Pod "pod-update-activedeadlineseconds-d0c62bbd-b3cc-4239-b026-9b078562e961": Phase="Running", Reason="", readiness=true. Elapsed: 9.350291ms
Oct 22 20:02:49.246: INFO: Pod "pod-update-activedeadlineseconds-d0c62bbd-b3cc-4239-b026-9b078562e961": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.013503916s
Oct 22 20:02:49.246: INFO: Pod "pod-update-activedeadlineseconds-d0c62bbd-b3cc-4239-b026-9b078562e961" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:02:49.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3291" for this suite.
Oct 22 20:02:55.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:02:55.334: INFO: namespace pods-3291 deletion completed in 6.084256104s

• [SLOW TEST:12.742 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
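
activeDeadlineSeconds is one of the few pod spec fields that may be changed on a live pod; once the deadline passes, the kubelet kills the pod and it ends up Phase=Failed with Reason=DeadlineExceeded, exactly as logged above. A hedged sketch of the update step (the pod name is illustrative):

# Shrink the deadline on a running pod; the field may be added or decreased, never increased.
kubectl patch pod pod-update-activedeadlineseconds-demo \
  -p '{"spec":{"activeDeadlineSeconds":5}}'

# A few seconds later the pod reports the terminal state the test waits for:
kubectl get pod pod-update-activedeadlineseconds-demo \
  -o jsonpath='{.status.phase}/{.status.reason}'    # Failed/DeadlineExceeded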
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:02:55.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:03:55.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6451" for this suite.
Oct 22 20:04:17.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:04:17.501: INFO: namespace container-probe-6451 deletion completed in 22.08236991s

• [SLOW TEST:82.166 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
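
Nothing is logged between [It] and [AfterEach] because the spec simply watches a pod for about a minute: a readiness probe that always fails must keep the container at READY 0/1 without ever triggering a restart (only liveness probes restart containers). A rough sketch of such a pod (image and probe command are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-never-ready
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails, so the container never becomes Ready
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl get pod readiness-never-ready    # READY stays 0/1, RESTARTS stays 0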
------------------------------
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:04:17.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:04:50.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7759" for this suite.
Oct 22 20:04:56.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:04:56.381: INFO: namespace container-runtime-7759 deletion completed in 6.139585894s

• [SLOW TEST:38.880 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
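
The three containers exercised above (rpa/rpof/rpn) exit under restartPolicy Always, OnFailure and Never, and the test asserts the resulting RestartCount, Phase, Ready condition and State for each. A single hedged example for the Never case (names and the exit code are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-status-demo
spec:
  restartPolicy: Never
  containers:
  - name: terminate-cmd
    image: busybox
    command: ["sh", "-c", "exit 1"]
EOF
# With restartPolicy=Never and a non-zero exit the expectations are:
#   Phase=Failed, Ready=false, RestartCount=0, State=Terminated with exitCode=1
kubectl get pod terminate-status-demo \
  -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount} {.status.containerStatuses[0].state.terminated.exitCode}'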
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:04:56.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Oct 22 20:04:56.470: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2efc1cc8-2e9d-4bdd-a371-89c5bafada39" in namespace "projected-2411" to be "success or failure"
Oct 22 20:04:56.483: INFO: Pod "downwardapi-volume-2efc1cc8-2e9d-4bdd-a371-89c5bafada39": Phase="Pending", Reason="", readiness=false. Elapsed: 12.362814ms
Oct 22 20:04:58.552: INFO: Pod "downwardapi-volume-2efc1cc8-2e9d-4bdd-a371-89c5bafada39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081622821s
Oct 22 20:05:00.556: INFO: Pod "downwardapi-volume-2efc1cc8-2e9d-4bdd-a371-89c5bafada39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085786912s
STEP: Saw pod success
Oct 22 20:05:00.556: INFO: Pod "downwardapi-volume-2efc1cc8-2e9d-4bdd-a371-89c5bafada39" satisfied condition "success or failure"
Oct 22 20:05:00.559: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-2efc1cc8-2e9d-4bdd-a371-89c5bafada39 container client-container: 
STEP: delete the pod
Oct 22 20:05:00.605: INFO: Waiting for pod downwardapi-volume-2efc1cc8-2e9d-4bdd-a371-89c5bafada39 to disappear
Oct 22 20:05:00.639: INFO: Pod downwardapi-volume-2efc1cc8-2e9d-4bdd-a371-89c5bafada39 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:05:00.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2411" for this suite.
Oct 22 20:05:06.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:05:06.830: INFO: namespace projected-2411 deletion completed in 6.187089822s

• [SLOW TEST:10.449 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
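
This variant reads the container's own memory request through a resourceFieldRef instead of a fieldRef. A minimal sketch (container name, request size, divisor and image are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memrequest-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: 1Mi        # file then contains "32"
EOF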
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:05:06.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-4zrn
STEP: Creating a pod to test atomic-volume-subpath
Oct 22 20:05:06.952: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4zrn" in namespace "subpath-1627" to be "success or failure"
Oct 22 20:05:06.956: INFO: Pod "pod-subpath-test-configmap-4zrn": Phase="Pending", Reason="", readiness=false. Elapsed: 3.834564ms
Oct 22 20:05:10.506: INFO: Pod "pod-subpath-test-configmap-4zrn": Phase="Pending", Reason="", readiness=false. Elapsed: 3.554172805s
Oct 22 20:05:12.511: INFO: Pod "pod-subpath-test-configmap-4zrn": Phase="Running", Reason="", readiness=true. Elapsed: 5.558561216s
Oct 22 20:05:14.515: INFO: Pod "pod-subpath-test-configmap-4zrn": Phase="Running", Reason="", readiness=true. Elapsed: 7.563287155s
Oct 22 20:05:16.520: INFO: Pod "pod-subpath-test-configmap-4zrn": Phase="Running", Reason="", readiness=true. Elapsed: 9.567606082s
Oct 22 20:05:18.524: INFO: Pod "pod-subpath-test-configmap-4zrn": Phase="Running", Reason="", readiness=true. Elapsed: 11.571721375s
Oct 22 20:05:20.527: INFO: Pod "pod-subpath-test-configmap-4zrn": Phase="Running", Reason="", readiness=true. Elapsed: 13.574945769s
Oct 22 20:05:22.531: INFO: Pod "pod-subpath-test-configmap-4zrn": Phase="Running", Reason="", readiness=true. Elapsed: 15.57904111s
Oct 22 20:05:24.535: INFO: Pod "pod-subpath-test-configmap-4zrn": Phase="Running", Reason="", readiness=true. Elapsed: 17.583248266s
Oct 22 20:05:26.540: INFO: Pod "pod-subpath-test-configmap-4zrn": Phase="Running", Reason="", readiness=true. Elapsed: 19.587602272s
Oct 22 20:05:28.544: INFO: Pod "pod-subpath-test-configmap-4zrn": Phase="Running", Reason="", readiness=true. Elapsed: 21.591806719s
Oct 22 20:05:30.548: INFO: Pod "pod-subpath-test-configmap-4zrn": Phase="Running", Reason="", readiness=true. Elapsed: 23.595630341s
Oct 22 20:05:32.552: INFO: Pod "pod-subpath-test-configmap-4zrn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.599452372s
STEP: Saw pod success
Oct 22 20:05:32.552: INFO: Pod "pod-subpath-test-configmap-4zrn" satisfied condition "success or failure"
Oct 22 20:05:32.554: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-4zrn container test-container-subpath-configmap-4zrn: 
STEP: delete the pod
Oct 22 20:05:32.605: INFO: Waiting for pod pod-subpath-test-configmap-4zrn to disappear
Oct 22 20:05:32.631: INFO: Pod pod-subpath-test-configmap-4zrn no longer exists
STEP: Deleting pod pod-subpath-test-configmap-4zrn
Oct 22 20:05:32.631: INFO: Deleting pod "pod-subpath-test-configmap-4zrn" in namespace "subpath-1627"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:05:32.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1627" for this suite.
Oct 22 20:05:38.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:05:38.800: INFO: namespace subpath-1627 deletion completed in 6.162796934s

• [SLOW TEST:31.968 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
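
The roughly 25-second run above comes from the atomic-writer pattern: the container repeatedly re-reads a ConfigMap key that has been subPath-mounted over a file which already exists in the image. A rough illustration of the mount shape only (file path, key, value and image are assumptions, not the suite's exact fixture):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo-config
data:
  group: "configmap-wins"
---
apiVersion: v1
kind: Pod
metadata:
  name: subpath-existing-file-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "cat /etc/group"]
    volumeMounts:
    - name: config
      mountPath: /etc/group    # file already present in the busybox image
      subPath: group           # only this key replaces it
  volumes:
  - name: config
    configMap:
      name: subpath-demo-config
EOF
# once the pod has completed:
kubectl logs subpath-existing-file-demo    # prints the ConfigMap value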
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:05:38.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Oct 22 20:05:46.956: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Oct 22 20:05:46.963: INFO: Pod pod-with-poststart-http-hook still exists
Oct 22 20:05:48.963: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Oct 22 20:05:48.967: INFO: Pod pod-with-poststart-http-hook still exists
Oct 22 20:05:50.963: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Oct 22 20:05:50.967: INFO: Pod pod-with-poststart-http-hook still exists
Oct 22 20:05:52.963: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Oct 22 20:05:52.967: INFO: Pod pod-with-poststart-http-hook still exists
Oct 22 20:05:54.963: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Oct 22 20:05:54.967: INFO: Pod pod-with-poststart-http-hook still exists
Oct 22 20:05:56.963: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Oct 22 20:05:56.967: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:05:56.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7138" for this suite.
Oct 22 20:06:18.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:06:19.064: INFO: namespace container-lifecycle-hook-7138 deletion completed in 22.092085969s

• [SLOW TEST:40.263 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
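
The [BeforeEach] above first starts a separate handler pod, then the pod under test fires an HTTP GET postStart hook at it; the tail of the log is just the usual wait for the hooked pod to go away. A hedged sketch of the hook stanza only (the host address, port and path are placeholders for the handler pod's real IP, which the test discovers at runtime):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        httpGet:
          host: 10.244.0.10    # placeholder: IP of the hook-handler pod
          port: 8080
          path: /echo
EOF
# The container is not reported Running until the postStart handler has completed:
kubectl get pod pod-with-poststart-http-hook-demo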
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:06:19.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Oct 22 20:06:19.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7573'
Oct 22 20:06:19.514: INFO: stderr: ""
Oct 22 20:06:19.514: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Oct 22 20:06:20.518: INFO: Selector matched 1 pods for map[app:redis]
Oct 22 20:06:20.518: INFO: Found 0 / 1
Oct 22 20:06:21.600: INFO: Selector matched 1 pods for map[app:redis]
Oct 22 20:06:21.600: INFO: Found 0 / 1
Oct 22 20:06:22.518: INFO: Selector matched 1 pods for map[app:redis]
Oct 22 20:06:22.518: INFO: Found 1 / 1
Oct 22 20:06:22.518: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Oct 22 20:06:22.522: INFO: Selector matched 1 pods for map[app:redis]
Oct 22 20:06:22.522: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Oct 22 20:06:22.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-wrp84 --namespace=kubectl-7573 -p {"metadata":{"annotations":{"x":"y"}}}'
Oct 22 20:06:22.623: INFO: stderr: ""
Oct 22 20:06:22.623: INFO: stdout: "pod/redis-master-wrp84 patched\n"
STEP: checking annotations
Oct 22 20:06:22.626: INFO: Selector matched 1 pods for map[app:redis]
Oct 22 20:06:22.626: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:06:22.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7573" for this suite.
Oct 22 20:06:44.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:06:44.737: INFO: namespace kubectl-7573 deletion completed in 22.10841117s

• [SLOW TEST:25.674 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
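
The patch command is visible in the log itself; the only part the output hides is the follow-up check that the annotation actually landed. The pair, using the pod name as logged (any pod name works):

kubectl patch pod redis-master-wrp84 --namespace=kubectl-7573 \
  -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl get pod redis-master-wrp84 --namespace=kubectl-7573 \
  -o jsonpath='{.metadata.annotations.x}'    # prints: y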
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:06:44.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Oct 22 20:06:45.329: INFO: created pod pod-service-account-defaultsa
Oct 22 20:06:45.329: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Oct 22 20:06:45.335: INFO: created pod pod-service-account-mountsa
Oct 22 20:06:45.336: INFO: pod pod-service-account-mountsa service account token volume mount: true
Oct 22 20:06:45.355: INFO: created pod pod-service-account-nomountsa
Oct 22 20:06:45.355: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Oct 22 20:06:45.379: INFO: created pod pod-service-account-defaultsa-mountspec
Oct 22 20:06:45.379: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Oct 22 20:06:45.410: INFO: created pod pod-service-account-mountsa-mountspec
Oct 22 20:06:45.411: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Oct 22 20:06:45.426: INFO: created pod pod-service-account-nomountsa-mountspec
Oct 22 20:06:45.426: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Oct 22 20:06:45.447: INFO: created pod pod-service-account-defaultsa-nomountspec
Oct 22 20:06:45.447: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Oct 22 20:06:45.506: INFO: created pod pod-service-account-mountsa-nomountspec
Oct 22 20:06:45.506: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Oct 22 20:06:45.531: INFO: created pod pod-service-account-nomountsa-nomountspec
Oct 22 20:06:45.531: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:06:45.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1285" for this suite.
Oct 22 20:07:13.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:07:13.809: INFO: namespace svcaccounts-1285 deletion completed in 28.221125462s

• [SLOW TEST:29.070 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
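
The nine pods above enumerate the combinations of automountServiceAccountToken on the ServiceAccount versus on the pod spec; the pod-level field always wins, which is why, for example, pod-service-account-nomountsa-mountspec still mounts a token while pod-service-account-defaultsa-nomountspec does not. A hedged sketch of the pod-level opt-out (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-no-token-demo
spec:
  serviceAccountName: default
  automountServiceAccountToken: false   # pod-level setting overrides the ServiceAccount's
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 600"]
EOF
# No service account token volume should appear:
kubectl get pod pod-no-token-demo -o jsonpath='{.spec.volumes[*].name}'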
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:07:13.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-7058, will wait for the garbage collector to delete the pods
Oct 22 20:07:17.946: INFO: Deleting Job.batch foo took: 6.580038ms
Oct 22 20:07:18.247: INFO: Terminating Job.batch foo pods took: 300.317912ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:07:55.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7058" for this suite.
Oct 22 20:08:01.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:08:01.858: INFO: namespace job-7058 deletion completed in 6.103265936s

• [SLOW TEST:48.048 seconds]
[sig-apps] Job
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
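
Deleting the Job object is enough; as the log notes, the garbage collector then removes the Job's pods (they carry an ownerReference to the Job and an automatically applied job-name label). A minimal equivalent, with the job name as logged:

kubectl delete job foo --namespace=job-7058
# The controller-created pods disappear shortly afterwards:
kubectl get pods --namespace=job-7058 -l job-name=foo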
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:08:01.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-2cdc467d-5a0f-49e3-9bca-c3b9441f5426
STEP: Creating a pod to test consume configMaps
Oct 22 20:08:01.942: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-beec49ae-47e4-4257-9747-131bbd985a36" in namespace "projected-8708" to be "success or failure"
Oct 22 20:08:01.962: INFO: Pod "pod-projected-configmaps-beec49ae-47e4-4257-9747-131bbd985a36": Phase="Pending", Reason="", readiness=false. Elapsed: 19.608402ms
Oct 22 20:08:03.966: INFO: Pod "pod-projected-configmaps-beec49ae-47e4-4257-9747-131bbd985a36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02360987s
Oct 22 20:08:05.971: INFO: Pod "pod-projected-configmaps-beec49ae-47e4-4257-9747-131bbd985a36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028251933s
STEP: Saw pod success
Oct 22 20:08:05.971: INFO: Pod "pod-projected-configmaps-beec49ae-47e4-4257-9747-131bbd985a36" satisfied condition "success or failure"
Oct 22 20:08:05.974: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-beec49ae-47e4-4257-9747-131bbd985a36 container projected-configmap-volume-test: 
STEP: delete the pod
Oct 22 20:08:06.154: INFO: Waiting for pod pod-projected-configmaps-beec49ae-47e4-4257-9747-131bbd985a36 to disappear
Oct 22 20:08:06.169: INFO: Pod pod-projected-configmaps-beec49ae-47e4-4257-9747-131bbd985a36 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:08:06.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8708" for this suite.
Oct 22 20:08:12.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:08:12.292: INFO: namespace projected-8708 deletion completed in 6.119500338s

• [SLOW TEST:10.434 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
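
"With mappings as non-root" means the ConfigMap keys are remapped to custom paths via items and the consuming container runs with a non-root UID. A rough sketch (UID, key names, value and image are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-cm-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # non-root
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/path/to/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
          items:
          - key: data-1
            path: path/to/data-1   # key remapped to a nested path
EOF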
------------------------------
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:08:12.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Oct 22 20:08:12.363: INFO: Waiting up to 5m0s for pod "client-containers-cd27ccde-7bab-411a-86ad-214bbf9ebeba" in namespace "containers-2966" to be "success or failure"
Oct 22 20:08:12.378: INFO: Pod "client-containers-cd27ccde-7bab-411a-86ad-214bbf9ebeba": Phase="Pending", Reason="", readiness=false. Elapsed: 14.500515ms
Oct 22 20:08:14.382: INFO: Pod "client-containers-cd27ccde-7bab-411a-86ad-214bbf9ebeba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018702085s
Oct 22 20:08:16.589: INFO: Pod "client-containers-cd27ccde-7bab-411a-86ad-214bbf9ebeba": Phase="Running", Reason="", readiness=true. Elapsed: 4.225708091s
Oct 22 20:08:18.594: INFO: Pod "client-containers-cd27ccde-7bab-411a-86ad-214bbf9ebeba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.230180556s
STEP: Saw pod success
Oct 22 20:08:18.594: INFO: Pod "client-containers-cd27ccde-7bab-411a-86ad-214bbf9ebeba" satisfied condition "success or failure"
Oct 22 20:08:18.597: INFO: Trying to get logs from node iruya-worker pod client-containers-cd27ccde-7bab-411a-86ad-214bbf9ebeba container test-container: 
STEP: delete the pod
Oct 22 20:08:18.620: INFO: Waiting for pod client-containers-cd27ccde-7bab-411a-86ad-214bbf9ebeba to disappear
Oct 22 20:08:18.623: INFO: Pod client-containers-cd27ccde-7bab-411a-86ad-214bbf9ebeba no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:08:18.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2966" for this suite.
Oct 22 20:08:26.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:08:26.751: INFO: namespace containers-2966 deletion completed in 8.124525139s

• [SLOW TEST:14.458 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
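
In pod terms, overriding the image's default command (the Docker ENTRYPOINT) is just setting spec.containers[].command; args would override CMD instead. A minimal sketch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo", "entrypoint overridden"]   # replaces the image ENTRYPOINT
EOF
# once the pod has completed:
kubectl logs entrypoint-override-demo    # prints: entrypoint overridden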
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:08:26.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1022 20:09:08.445387       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Oct 22 20:09:08.445: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:09:08.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6467" for this suite.
Oct 22 20:09:16.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:09:16.534: INFO: namespace gc-6467 deletion completed in 8.085729894s

• [SLOW TEST:49.783 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
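
"Orphan" deletion removes the ReplicationController but leaves its pods behind with their ownerReferences cleared, and the 30-second wait above confirms the garbage collector really does leave them alone. With kubectl this corresponds to the following (the RC name is illustrative; kubectl 1.20+ spells the flag --cascade=orphan, while older clients such as the v1.15 one used here spelled it --cascade=false):

kubectl delete rc simpletest-rc --cascade=orphan    # use --cascade=false on kubectl <= 1.19
# The replica pods survive the deletion and no longer point at the RC:
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name} {.metadata.ownerReferences}{"\n"}{end}'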
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:09:16.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Oct 22 20:09:21.687: INFO: Successfully updated pod "pod-update-4bdc4118-369f-4352-a196-77c3efc34ad8"
STEP: verifying the updated pod is in kubernetes
Oct 22 20:09:21.707: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:09:21.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2716" for this suite.
Oct 22 20:09:43.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:09:43.822: INFO: namespace pods-2716 deletion completed in 22.111399309s

• [SLOW TEST:27.287 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
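
The "update" here is an ordinary mutation of a mutable pod field (the conformance case rewrites the pod's labels in place); any in-place label change exercises the same path (pod name and label key are illustrative):

kubectl label pod pod-update-demo time="$(date +%s)" --overwrite
kubectl get pod pod-update-demo --show-labels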
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:09:43.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-e1af3074-d309-4742-b9ae-e5b409bfc696
STEP: Creating a pod to test consume configMaps
Oct 22 20:09:43.886: INFO: Waiting up to 5m0s for pod "pod-configmaps-0991000d-f187-4de1-ae86-42d1973b18dc" in namespace "configmap-2926" to be "success or failure"
Oct 22 20:09:43.890: INFO: Pod "pod-configmaps-0991000d-f187-4de1-ae86-42d1973b18dc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.725761ms
Oct 22 20:09:45.945: INFO: Pod "pod-configmaps-0991000d-f187-4de1-ae86-42d1973b18dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058633166s
Oct 22 20:09:47.949: INFO: Pod "pod-configmaps-0991000d-f187-4de1-ae86-42d1973b18dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063189085s
STEP: Saw pod success
Oct 22 20:09:47.949: INFO: Pod "pod-configmaps-0991000d-f187-4de1-ae86-42d1973b18dc" satisfied condition "success or failure"
Oct 22 20:09:47.953: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-0991000d-f187-4de1-ae86-42d1973b18dc container configmap-volume-test: 
STEP: delete the pod
Oct 22 20:09:47.990: INFO: Waiting for pod pod-configmaps-0991000d-f187-4de1-ae86-42d1973b18dc to disappear
Oct 22 20:09:47.998: INFO: Pod pod-configmaps-0991000d-f187-4de1-ae86-42d1973b18dc no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:09:47.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2926" for this suite.
Oct 22 20:09:54.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:09:54.090: INFO: namespace configmap-2926 deletion completed in 6.088753648s

• [SLOW TEST:10.268 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:09:54.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Oct 22 20:09:54.175: INFO: Waiting up to 5m0s for pod "pod-96c5aa13-de3d-4e61-b8fc-bdc0d2d7af03" in namespace "emptydir-2027" to be "success or failure"
Oct 22 20:09:54.180: INFO: Pod "pod-96c5aa13-de3d-4e61-b8fc-bdc0d2d7af03": Phase="Pending", Reason="", readiness=false. Elapsed: 5.074181ms
Oct 22 20:09:56.184: INFO: Pod "pod-96c5aa13-de3d-4e61-b8fc-bdc0d2d7af03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009894244s
Oct 22 20:09:58.189: INFO: Pod "pod-96c5aa13-de3d-4e61-b8fc-bdc0d2d7af03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014099263s
STEP: Saw pod success
Oct 22 20:09:58.189: INFO: Pod "pod-96c5aa13-de3d-4e61-b8fc-bdc0d2d7af03" satisfied condition "success or failure"
Oct 22 20:09:58.192: INFO: Trying to get logs from node iruya-worker2 pod pod-96c5aa13-de3d-4e61-b8fc-bdc0d2d7af03 container test-container: 
STEP: delete the pod
Oct 22 20:09:58.223: INFO: Waiting for pod pod-96c5aa13-de3d-4e61-b8fc-bdc0d2d7af03 to disappear
Oct 22 20:09:58.245: INFO: Pod pod-96c5aa13-de3d-4e61-b8fc-bdc0d2d7af03 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:09:58.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2027" for this suite.
Oct 22 20:10:04.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:10:04.368: INFO: namespace emptydir-2027 deletion completed in 6.120283494s

• [SLOW TEST:10.278 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
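
"(non-root,0666,default)" decodes as: a non-root UID writes a file with 0666 permissions into an emptyDir backed by the default (node disk) medium, and the test reads the mode and contents back. A hedged busybox approximation (UID, paths and file content are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-0666-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo mount-tester new file > /mnt/ed/f && chmod 0666 /mnt/ed/f && ls -l /mnt/ed/f"]
    volumeMounts:
    - name: ed
      mountPath: /mnt/ed
  volumes:
  - name: ed
    emptyDir: {}               # default medium, i.e. node disk rather than memory
EOF
# once the pod has completed:
kubectl logs emptydir-nonroot-0666-demo    # -rw-rw-rw- ... /mnt/ed/f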
------------------------------
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:10:04.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Oct 22 20:10:04.455: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Oct 22 20:10:04.462: INFO: Waiting for terminating namespaces to be deleted...
Oct 22 20:10:04.465: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Oct 22 20:10:04.471: INFO: kindnet-7bsvw from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Oct 22 20:10:04.472: INFO: 	Container kindnet-cni ready: true, restart count 0
Oct 22 20:10:04.472: INFO: kube-proxy-mtljr from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Oct 22 20:10:04.472: INFO: 	Container kube-proxy ready: true, restart count 0
Oct 22 20:10:04.472: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Oct 22 20:10:04.477: INFO: kindnet-djqgh from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Oct 22 20:10:04.477: INFO: 	Container kindnet-cni ready: true, restart count 0
Oct 22 20:10:04.477: INFO: kube-proxy-52wt5 from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Oct 22 20:10:04.477: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Oct 22 20:10:04.581: INFO: Pod kindnet-7bsvw requesting resource cpu=100m on Node iruya-worker
Oct 22 20:10:04.581: INFO: Pod kindnet-djqgh requesting resource cpu=100m on Node iruya-worker2
Oct 22 20:10:04.581: INFO: Pod kube-proxy-52wt5 requesting resource cpu=0m on Node iruya-worker2
Oct 22 20:10:04.581: INFO: Pod kube-proxy-mtljr requesting resource cpu=0m on Node iruya-worker
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-47269479-9719-4ece-b5d8-8091e2d0e100.16406971bed7db86], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5971/filler-pod-47269479-9719-4ece-b5d8-8091e2d0e100 to iruya-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-47269479-9719-4ece-b5d8-8091e2d0e100.16406972361f480c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-47269479-9719-4ece-b5d8-8091e2d0e100.164069726f04a74d], Reason = [Created], Message = [Created container filler-pod-47269479-9719-4ece-b5d8-8091e2d0e100]
STEP: Considering event: Type = [Normal], Name = [filler-pod-47269479-9719-4ece-b5d8-8091e2d0e100.16406972809be21f], Reason = [Started], Message = [Started container filler-pod-47269479-9719-4ece-b5d8-8091e2d0e100]
STEP: Considering event: Type = [Normal], Name = [filler-pod-4e31cb37-4d73-4ace-bcaa-c2ff67ba6a3b.16406971bed8f36d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5971/filler-pod-4e31cb37-4d73-4ace-bcaa-c2ff67ba6a3b to iruya-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-4e31cb37-4d73-4ace-bcaa-c2ff67ba6a3b.1640697208ad29ad], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-4e31cb37-4d73-4ace-bcaa-c2ff67ba6a3b.1640697255927654], Reason = [Created], Message = [Created container filler-pod-4e31cb37-4d73-4ace-bcaa-c2ff67ba6a3b]
STEP: Considering event: Type = [Normal], Name = [filler-pod-4e31cb37-4d73-4ace-bcaa-c2ff67ba6a3b.164069726f05c779], Reason = [Started], Message = [Started container filler-pod-4e31cb37-4d73-4ace-bcaa-c2ff67ba6a3b]
STEP: Considering event: Type = [Warning], Name = [additional-pod.16406973256be44d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:10:12.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5971" for this suite.
Oct 22 20:10:19.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:10:19.336: INFO: namespace sched-pred-5971 deletion completed in 6.463780793s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:14.968 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
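The FailedScheduling event above is the expected outcome: the test fills each schedulable worker with pause-image filler pods sized to the node's remaining allocatable CPU, then creates one more pod whose request cannot fit anywhere (the control-plane node is excluded by its NoSchedule taint). A minimal, hypothetical reproduction outside the suite, with an illustrative pod name and an assumed oversized request, would look like:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod-demo            # illustrative name, not from this run
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1        # same image the filler pods use
    resources:
      requests:
        cpu: "1000m"                   # assumed to exceed the CPU left on every schedulable node
EOF
kubectl get events --field-selector reason=FailedScheduling   # expect "Insufficient cpu" while the request cannot fit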
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:10:19.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Oct 22 20:10:19.720: INFO: Waiting up to 5m0s for pod "pod-2b2a8618-b976-4363-928d-73a33230a904" in namespace "emptydir-7402" to be "success or failure"
Oct 22 20:10:19.738: INFO: Pod "pod-2b2a8618-b976-4363-928d-73a33230a904": Phase="Pending", Reason="", readiness=false. Elapsed: 18.180934ms
Oct 22 20:10:21.742: INFO: Pod "pod-2b2a8618-b976-4363-928d-73a33230a904": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022074452s
Oct 22 20:10:23.746: INFO: Pod "pod-2b2a8618-b976-4363-928d-73a33230a904": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026048209s
STEP: Saw pod success
Oct 22 20:10:23.746: INFO: Pod "pod-2b2a8618-b976-4363-928d-73a33230a904" satisfied condition "success or failure"
Oct 22 20:10:23.748: INFO: Trying to get logs from node iruya-worker pod pod-2b2a8618-b976-4363-928d-73a33230a904 container test-container: 
STEP: delete the pod
Oct 22 20:10:23.769: INFO: Waiting for pod pod-2b2a8618-b976-4363-928d-73a33230a904 to disappear
Oct 22 20:10:23.773: INFO: Pod pod-2b2a8618-b976-4363-928d-73a33230a904 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:10:23.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7402" for this suite.
Oct 22 20:10:29.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:10:29.865: INFO: namespace emptydir-7402 deletion completed in 6.088940944s

• [SLOW TEST:10.528 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
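The (root,0777,default) variant above creates a pod whose emptyDir uses the node's default storage medium, writes a file as root with mode 0777, and checks the observed permissions from inside the container. A rough hand-run equivalent, using busybox as a stand-in for the e2e mounttest image (pod name, file name and commands are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-demo              # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                      # stand-in for the e2e mounttest image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                        # default medium (node disk), as in this test
EOF
kubectl logs emptydir-0777-demo         # expect 777 once the container has exited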
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:10:29.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1344
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 22 20:10:29.917: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Oct 22 20:10:58.057: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.61:8080/dial?request=hostName&protocol=http&host=10.244.2.60&port=8080&tries=1'] Namespace:pod-network-test-1344 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 22 20:10:58.057: INFO: >>> kubeConfig: /root/.kube/config
I1022 20:10:58.097586       6 log.go:172] (0xc0014d4630) (0xc003818320) Create stream
I1022 20:10:58.097677       6 log.go:172] (0xc0014d4630) (0xc003818320) Stream added, broadcasting: 1
I1022 20:10:58.101169       6 log.go:172] (0xc0014d4630) Reply frame received for 1
I1022 20:10:58.101219       6 log.go:172] (0xc0014d4630) (0xc0017280a0) Create stream
I1022 20:10:58.101235       6 log.go:172] (0xc0014d4630) (0xc0017280a0) Stream added, broadcasting: 3
I1022 20:10:58.104779       6 log.go:172] (0xc0014d4630) Reply frame received for 3
I1022 20:10:58.104802       6 log.go:172] (0xc0014d4630) (0xc0038183c0) Create stream
I1022 20:10:58.104809       6 log.go:172] (0xc0014d4630) (0xc0038183c0) Stream added, broadcasting: 5
I1022 20:10:58.105763       6 log.go:172] (0xc0014d4630) Reply frame received for 5
I1022 20:10:58.229890       6 log.go:172] (0xc0014d4630) Data frame received for 3
I1022 20:10:58.229931       6 log.go:172] (0xc0017280a0) (3) Data frame handling
I1022 20:10:58.229964       6 log.go:172] (0xc0017280a0) (3) Data frame sent
I1022 20:10:58.231217       6 log.go:172] (0xc0014d4630) Data frame received for 3
I1022 20:10:58.231243       6 log.go:172] (0xc0017280a0) (3) Data frame handling
I1022 20:10:58.231291       6 log.go:172] (0xc0014d4630) Data frame received for 5
I1022 20:10:58.231334       6 log.go:172] (0xc0038183c0) (5) Data frame handling
I1022 20:10:58.233251       6 log.go:172] (0xc0014d4630) Data frame received for 1
I1022 20:10:58.233329       6 log.go:172] (0xc003818320) (1) Data frame handling
I1022 20:10:58.233376       6 log.go:172] (0xc003818320) (1) Data frame sent
I1022 20:10:58.233396       6 log.go:172] (0xc0014d4630) (0xc003818320) Stream removed, broadcasting: 1
I1022 20:10:58.233434       6 log.go:172] (0xc0014d4630) Go away received
I1022 20:10:58.233544       6 log.go:172] (0xc0014d4630) (0xc003818320) Stream removed, broadcasting: 1
I1022 20:10:58.233576       6 log.go:172] (0xc0014d4630) (0xc0017280a0) Stream removed, broadcasting: 3
I1022 20:10:58.233601       6 log.go:172] (0xc0014d4630) (0xc0038183c0) Stream removed, broadcasting: 5
Oct 22 20:10:58.233: INFO: Waiting for endpoints: map[]
Oct 22 20:10:58.237: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.61:8080/dial?request=hostName&protocol=http&host=10.244.1.21&port=8080&tries=1'] Namespace:pod-network-test-1344 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 22 20:10:58.237: INFO: >>> kubeConfig: /root/.kube/config
I1022 20:10:58.261217       6 log.go:172] (0xc002afa9a0) (0xc0019e4f00) Create stream
I1022 20:10:58.261247       6 log.go:172] (0xc002afa9a0) (0xc0019e4f00) Stream added, broadcasting: 1
I1022 20:10:58.263386       6 log.go:172] (0xc002afa9a0) Reply frame received for 1
I1022 20:10:58.263475       6 log.go:172] (0xc002afa9a0) (0xc00173ea00) Create stream
I1022 20:10:58.263487       6 log.go:172] (0xc002afa9a0) (0xc00173ea00) Stream added, broadcasting: 3
I1022 20:10:58.264349       6 log.go:172] (0xc002afa9a0) Reply frame received for 3
I1022 20:10:58.264387       6 log.go:172] (0xc002afa9a0) (0xc00173eaa0) Create stream
I1022 20:10:58.264396       6 log.go:172] (0xc002afa9a0) (0xc00173eaa0) Stream added, broadcasting: 5
I1022 20:10:58.265548       6 log.go:172] (0xc002afa9a0) Reply frame received for 5
I1022 20:10:58.336163       6 log.go:172] (0xc002afa9a0) Data frame received for 3
I1022 20:10:58.336195       6 log.go:172] (0xc00173ea00) (3) Data frame handling
I1022 20:10:58.336216       6 log.go:172] (0xc00173ea00) (3) Data frame sent
I1022 20:10:58.337097       6 log.go:172] (0xc002afa9a0) Data frame received for 3
I1022 20:10:58.337143       6 log.go:172] (0xc00173ea00) (3) Data frame handling
I1022 20:10:58.337213       6 log.go:172] (0xc002afa9a0) Data frame received for 5
I1022 20:10:58.337231       6 log.go:172] (0xc00173eaa0) (5) Data frame handling
I1022 20:10:58.338657       6 log.go:172] (0xc002afa9a0) Data frame received for 1
I1022 20:10:58.338680       6 log.go:172] (0xc0019e4f00) (1) Data frame handling
I1022 20:10:58.338705       6 log.go:172] (0xc0019e4f00) (1) Data frame sent
I1022 20:10:58.338759       6 log.go:172] (0xc002afa9a0) (0xc0019e4f00) Stream removed, broadcasting: 1
I1022 20:10:58.338875       6 log.go:172] (0xc002afa9a0) (0xc0019e4f00) Stream removed, broadcasting: 1
I1022 20:10:58.338895       6 log.go:172] (0xc002afa9a0) (0xc00173ea00) Stream removed, broadcasting: 3
I1022 20:10:58.338930       6 log.go:172] (0xc002afa9a0) Go away received
I1022 20:10:58.339033       6 log.go:172] (0xc002afa9a0) (0xc00173eaa0) Stream removed, broadcasting: 5
Oct 22 20:10:58.339: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:10:58.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1344" for this suite.
Oct 22 20:11:22.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:11:22.442: INFO: namespace pod-network-test-1344 deletion completed in 24.098968397s

• [SLOW TEST:52.576 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
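Each ExecWithOptions entry above runs curl inside the hostexec container of host-test-container-pod, hitting the test container's /dial endpoint, which in turn dials the target pod IP on port 8080 and reports which targets answered; "Waiting for endpoints: map[]" means every expected pod has responded. While the pod-network-test-1344 namespace still exists, the same probe can be issued directly with kubectl (the IPs below are the ones logged for this run):

kubectl exec -n pod-network-test-1344 host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s 'http://10.244.2.61:8080/dial?request=hostName&protocol=http&host=10.244.2.60&port=8080&tries=1'"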
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:11:22.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Oct 22 20:11:26.631: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:11:26.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9030" for this suite.
Oct 22 20:11:32.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:11:32.741: INFO: namespace container-runtime-9030 deletion completed in 6.091793965s

• [SLOW TEST:10.299 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
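The assertion "Expected: &{} to match Container's Termination Message: --" is the point of this test: with TerminationMessagePolicy FallbackToLogsOnError, container logs are copied into the termination message only when the container fails, so a container that succeeds without writing /dev/termination-log ends up with an empty message. A hand-rolled sketch of the same behaviour (pod name, image and command are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: termmsg-demo                    # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: termmsg
    image: busybox
    command: ["sh", "-c", "echo some log output; exit 0"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# once the pod has succeeded, the terminated state should carry an empty message
kubectl get pod termmsg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'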
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:11:32.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Oct 22 20:11:32.853: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ee342cbb-7f59-4a5c-852b-dd5368589f64" in namespace "downward-api-5873" to be "success or failure"
Oct 22 20:11:32.886: INFO: Pod "downwardapi-volume-ee342cbb-7f59-4a5c-852b-dd5368589f64": Phase="Pending", Reason="", readiness=false. Elapsed: 33.182174ms
Oct 22 20:11:34.890: INFO: Pod "downwardapi-volume-ee342cbb-7f59-4a5c-852b-dd5368589f64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037089737s
Oct 22 20:11:36.895: INFO: Pod "downwardapi-volume-ee342cbb-7f59-4a5c-852b-dd5368589f64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041548603s
STEP: Saw pod success
Oct 22 20:11:36.895: INFO: Pod "downwardapi-volume-ee342cbb-7f59-4a5c-852b-dd5368589f64" satisfied condition "success or failure"
Oct 22 20:11:36.898: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-ee342cbb-7f59-4a5c-852b-dd5368589f64 container client-container: 
STEP: delete the pod
Oct 22 20:11:36.912: INFO: Waiting for pod downwardapi-volume-ee342cbb-7f59-4a5c-852b-dd5368589f64 to disappear
Oct 22 20:11:36.918: INFO: Pod downwardapi-volume-ee342cbb-7f59-4a5c-852b-dd5368589f64 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:11:36.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5873" for this suite.
Oct 22 20:11:42.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:11:43.042: INFO: namespace downward-api-5873 deletion completed in 6.121022047s

• [SLOW TEST:10.301 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
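The "podname only" case above mounts a downward API volume that projects just metadata.name into a file, and the client container prints that file so the framework can compare it against the pod's actual name. A minimal sketch with illustrative pod name, mount path and image:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-podname-demo           # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                      # stand-in for the e2e mounttest image
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name      # the only field this test projects
EOF
kubectl logs downward-podname-demo      # expect the pod's own name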
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:11:43.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Oct 22 20:11:43.109: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Oct 22 20:11:43.116: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:11:43.129: INFO: Number of nodes with available pods: 0
Oct 22 20:11:43.129: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:11:44.134: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:11:44.137: INFO: Number of nodes with available pods: 0
Oct 22 20:11:44.137: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:11:45.135: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:11:45.138: INFO: Number of nodes with available pods: 0
Oct 22 20:11:45.138: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:11:46.192: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:11:46.201: INFO: Number of nodes with available pods: 0
Oct 22 20:11:46.201: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:11:47.135: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:11:47.138: INFO: Number of nodes with available pods: 2
Oct 22 20:11:47.138: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Oct 22 20:11:47.258: INFO: Wrong image for pod: daemon-set-nrjc8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Oct 22 20:11:47.258: INFO: Wrong image for pod: daemon-set-qjmmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Oct 22 20:11:47.274: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:11:48.300: INFO: Wrong image for pod: daemon-set-nrjc8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Oct 22 20:11:48.300: INFO: Wrong image for pod: daemon-set-qjmmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Oct 22 20:11:48.304: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:11:49.278: INFO: Wrong image for pod: daemon-set-nrjc8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Oct 22 20:11:49.278: INFO: Wrong image for pod: daemon-set-qjmmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Oct 22 20:11:49.564: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:11:50.278: INFO: Wrong image for pod: daemon-set-nrjc8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Oct 22 20:11:50.278: INFO: Wrong image for pod: daemon-set-qjmmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Oct 22 20:11:50.282: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:11:51.278: INFO: Wrong image for pod: daemon-set-nrjc8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Oct 22 20:11:51.278: INFO: Pod daemon-set-nrjc8 is not available
Oct 22 20:11:51.278: INFO: Wrong image for pod: daemon-set-qjmmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Oct 22 20:11:51.282: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:11:52.278: INFO: Wrong image for pod: daemon-set-qjmmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Oct 22 20:11:52.278: INFO: Pod daemon-set-r5m2l is not available
Oct 22 20:11:52.287: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:11:53.278: INFO: Wrong image for pod: daemon-set-qjmmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Oct 22 20:11:53.278: INFO: Pod daemon-set-r5m2l is not available
Oct 22 20:11:53.281: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:11:54.278: INFO: Wrong image for pod: daemon-set-qjmmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Oct 22 20:11:54.278: INFO: Pod daemon-set-r5m2l is not available
Oct 22 20:11:54.295: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:11:55.278: INFO: Wrong image for pod: daemon-set-qjmmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Oct 22 20:11:55.281: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:11:56.278: INFO: Wrong image for pod: daemon-set-qjmmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Oct 22 20:11:56.282: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:11:57.279: INFO: Wrong image for pod: daemon-set-qjmmc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Oct 22 20:11:57.279: INFO: Pod daemon-set-qjmmc is not available
Oct 22 20:11:57.284: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:11:58.277: INFO: Pod daemon-set-cgcwx is not available
Oct 22 20:11:58.281: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Oct 22 20:11:58.285: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:11:58.288: INFO: Number of nodes with available pods: 1
Oct 22 20:11:58.288: INFO: Node iruya-worker2 is running more than one daemon pod
Oct 22 20:11:59.292: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:11:59.296: INFO: Number of nodes with available pods: 1
Oct 22 20:11:59.296: INFO: Node iruya-worker2 is running more than one daemon pod
Oct 22 20:12:00.293: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:12:00.296: INFO: Number of nodes with available pods: 1
Oct 22 20:12:00.296: INFO: Node iruya-worker2 is running more than one daemon pod
Oct 22 20:12:01.295: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:12:01.298: INFO: Number of nodes with available pods: 1
Oct 22 20:12:01.298: INFO: Node iruya-worker2 is running more than one daemon pod
Oct 22 20:12:02.294: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:12:02.297: INFO: Number of nodes with available pods: 2
Oct 22 20:12:02.297: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4764, will wait for the garbage collector to delete the pods
Oct 22 20:12:02.380: INFO: Deleting DaemonSet.extensions daemon-set took: 17.572824ms
Oct 22 20:12:02.680: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.288886ms
Oct 22 20:12:15.684: INFO: Number of nodes with available pods: 0
Oct 22 20:12:15.684: INFO: Number of running nodes: 0, number of available pods: 0
Oct 22 20:12:15.687: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4764/daemonsets","resourceVersion":"5322677"},"items":null}

Oct 22 20:12:15.689: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4764/pods","resourceVersion":"5322677"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:12:15.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4764" for this suite.
Oct 22 20:12:21.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:12:21.823: INFO: namespace daemonsets-4764 deletion completed in 6.097395958s

• [SLOW TEST:38.781 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
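The update phase above is driven by patching the DaemonSet's pod template image from docker.io/library/nginx:1.14-alpine to gcr.io/kubernetes-e2e-test-images/redis:1.0; with updateStrategy RollingUpdate the controller deletes and recreates pods node by node, which is why one node briefly reports zero available pods before the count returns to 2. While the daemonsets-4764 namespace exists, the same rollout could be driven by hand (the container name below is a placeholder, not taken from this log):

# update the image of the DaemonSet's container ("app" is a placeholder; check with: kubectl -n daemonsets-4764 get ds daemon-set -o yaml)
kubectl -n daemonsets-4764 set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
# watch the RollingUpdate replace pods one node at a time
kubectl -n daemonsets-4764 rollout status daemonset/daemon-set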
SSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:12:21.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Oct 22 20:12:21.929: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3084,SelfLink:/api/v1/namespaces/watch-3084/configmaps/e2e-watch-test-label-changed,UID:15c5ffed-bfde-4ac0-b892-671b2093e1b2,ResourceVersion:5322725,Generation:0,CreationTimestamp:2020-10-22 20:12:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Oct 22 20:12:21.929: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3084,SelfLink:/api/v1/namespaces/watch-3084/configmaps/e2e-watch-test-label-changed,UID:15c5ffed-bfde-4ac0-b892-671b2093e1b2,ResourceVersion:5322726,Generation:0,CreationTimestamp:2020-10-22 20:12:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Oct 22 20:12:21.929: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3084,SelfLink:/api/v1/namespaces/watch-3084/configmaps/e2e-watch-test-label-changed,UID:15c5ffed-bfde-4ac0-b892-671b2093e1b2,ResourceVersion:5322727,Generation:0,CreationTimestamp:2020-10-22 20:12:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Oct 22 20:12:31.958: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3084,SelfLink:/api/v1/namespaces/watch-3084/configmaps/e2e-watch-test-label-changed,UID:15c5ffed-bfde-4ac0-b892-671b2093e1b2,ResourceVersion:5322748,Generation:0,CreationTimestamp:2020-10-22 20:12:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Oct 22 20:12:31.958: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3084,SelfLink:/api/v1/namespaces/watch-3084/configmaps/e2e-watch-test-label-changed,UID:15c5ffed-bfde-4ac0-b892-671b2093e1b2,ResourceVersion:5322749,Generation:0,CreationTimestamp:2020-10-22 20:12:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Oct 22 20:12:31.958: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3084,SelfLink:/api/v1/namespaces/watch-3084/configmaps/e2e-watch-test-label-changed,UID:15c5ffed-bfde-4ac0-b892-671b2093e1b2,ResourceVersion:5322750,Generation:0,CreationTimestamp:2020-10-22 20:12:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:12:31.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3084" for this suite.
Oct 22 20:12:37.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:12:38.063: INFO: namespace watch-3084 deletion completed in 6.099069747s

• [SLOW TEST:16.240 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
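The ADDED/MODIFIED/DELETED sequences above come from a watch scoped by label selector: changing the configmap's label so it no longer matches the selector surfaces as DELETED to the watcher, and restoring the label surfaces as ADDED together with the mutations made in between. While the watch-3084 namespace exists, an equivalent selector-scoped watch can be opened with kubectl:

# stream only watch events for configmaps carrying the test's selector label
kubectl get configmaps -n watch-3084 \
  -l watch-this-configmap=label-changed-and-restored \
  --watch-only -o name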
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:12:38.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Oct 22 20:12:38.138: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Oct 22 20:12:38.816: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Oct 22 20:12:41.595: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738994358, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738994358, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738994358, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738994358, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 22 20:12:43.599: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738994358, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738994358, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738994358, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738994358, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 22 20:12:46.229: INFO: Waited 624.59744ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:12:46.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4842" for this suite.
Oct 22 20:12:52.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:12:53.060: INFO: namespace aggregator-4842 deletion completed in 6.372418899s

• [SLOW TEST:14.997 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
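Registering the sample API server means pointing the aggregation layer at a Service in the test namespace via an APIService object, which is what the deployment status polling above is waiting on. A skeletal registration of that shape (the group/version, Service name and TLS handling below are illustrative placeholders, not values taken from this run; the test wires up a caBundle from certificates it generates):

cat <<'EOF' | kubectl apply -f -
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com    # illustrative group/version name
spec:
  group: wardle.example.com            # illustrative; use the sample server's actual group
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  service:
    name: sample-api                   # illustrative Service fronting the aggregated server
    namespace: aggregator-4842
  insecureSkipTLSVerify: true          # placeholder; the real test sets caBundle instead
EOF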
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:12:53.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Oct 22 20:12:53.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7205'
Oct 22 20:12:56.087: INFO: stderr: ""
Oct 22 20:12:56.087: INFO: stdout: "pod/pause created\n"
Oct 22 20:12:56.087: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Oct 22 20:12:56.087: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7205" to be "running and ready"
Oct 22 20:12:56.094: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 7.177914ms
Oct 22 20:12:58.187: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100037724s
Oct 22 20:13:00.191: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.104222697s
Oct 22 20:13:00.191: INFO: Pod "pause" satisfied condition "running and ready"
Oct 22 20:13:00.192: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Oct 22 20:13:00.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7205'
Oct 22 20:13:00.292: INFO: stderr: ""
Oct 22 20:13:00.292: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Oct 22 20:13:00.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7205'
Oct 22 20:13:00.388: INFO: stderr: ""
Oct 22 20:13:00.388: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Oct 22 20:13:00.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7205'
Oct 22 20:13:00.481: INFO: stderr: ""
Oct 22 20:13:00.481: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Oct 22 20:13:00.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7205'
Oct 22 20:13:00.574: INFO: stderr: ""
Oct 22 20:13:00.574: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    \n"
[AfterEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Oct 22 20:13:00.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7205'
Oct 22 20:13:00.706: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 22 20:13:00.706: INFO: stdout: "pod \"pause\" force deleted\n"
Oct 22 20:13:00.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7205'
Oct 22 20:13:00.893: INFO: stderr: "No resources found.\n"
Oct 22 20:13:00.893: INFO: stdout: ""
Oct 22 20:13:00.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7205 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Oct 22 20:13:00.990: INFO: stderr: ""
Oct 22 20:13:00.991: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:13:00.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7205" for this suite.
Oct 22 20:13:07.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:13:07.092: INFO: namespace kubectl-7205 deletion completed in 6.097994225s

• [SLOW TEST:14.031 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:13:07.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Oct 22 20:13:07.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Oct 22 20:13:07.250: INFO: stderr: ""
Oct 22 20:13:07.250: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37711\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37711/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:13:07.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2025" for this suite.
Oct 22 20:13:13.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:13:13.541: INFO: namespace kubectl-2025 deletion completed in 6.112479041s

• [SLOW TEST:6.448 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:13:13.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Oct 22 20:13:18.147: INFO: Successfully updated pod "labelsupdate1d958992-f0b5-45e2-838b-bc92dde0b0a8"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:13:20.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8106" for this suite.
Oct 22 20:13:42.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:13:42.302: INFO: namespace projected-8106 deletion completed in 22.09222049s

• [SLOW TEST:28.761 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
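"Successfully updated pod" above refers to the framework relabeling a running pod whose projected downward API volume exposes metadata.labels as a file; the kubelet refreshes projected volumes on its sync period, so the file content inside the container changes without a restart. A minimal sketch of such a pod plus the relabel step (names, paths and image are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo               # illustrative
  labels:
    key: value1
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
kubectl label pod labelsupdate-demo key=value2 --overwrite
kubectl logs labelsupdate-demo --tail=1   # eventually shows key="value2" after the kubelet refreshes the volume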
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:13:42.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Oct 22 20:13:42.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-340'
Oct 22 20:13:42.500: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Oct 22 20:13:42.500: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Oct 22 20:13:42.523: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Oct 22 20:13:42.541: INFO: scanned /root for discovery docs: 
Oct 22 20:13:42.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-340'
Oct 22 20:13:58.429: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Oct 22 20:13:58.429: INFO: stdout: "Created e2e-test-nginx-rc-4fbf7dda205945186186959d35bb2cb6\nScaling up e2e-test-nginx-rc-4fbf7dda205945186186959d35bb2cb6 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-4fbf7dda205945186186959d35bb2cb6 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-4fbf7dda205945186186959d35bb2cb6 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Oct 22 20:13:58.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-340'
Oct 22 20:13:58.531: INFO: stderr: ""
Oct 22 20:13:58.531: INFO: stdout: "e2e-test-nginx-rc-4fbf7dda205945186186959d35bb2cb6-72pjg "
Oct 22 20:13:58.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-4fbf7dda205945186186959d35bb2cb6-72pjg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-340'
Oct 22 20:13:58.612: INFO: stderr: ""
Oct 22 20:13:58.612: INFO: stdout: "true"
Oct 22 20:13:58.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-4fbf7dda205945186186959d35bb2cb6-72pjg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-340'
Oct 22 20:13:58.701: INFO: stderr: ""
Oct 22 20:13:58.701: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Oct 22 20:13:58.701: INFO: e2e-test-nginx-rc-4fbf7dda205945186186959d35bb2cb6-72pjg is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Oct 22 20:13:58.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-340'
Oct 22 20:13:58.803: INFO: stderr: ""
Oct 22 20:13:58.803: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:13:58.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-340" for this suite.
Oct 22 20:14:04.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:14:04.932: INFO: namespace kubectl-340 deletion completed in 6.125468133s

• [SLOW TEST:22.630 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
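The kubectl invocations above can be replayed by hand; both the run/v1 generator and rolling-update are deprecated in this release, as the stderr lines note. A minimal sketch, with the kubeconfig and namespace flags omitted:

kubectl run e2e-test-nginx-rc --generator=run/v1 --image=docker.io/library/nginx:1.14-alpine
kubectl rolling-update e2e-test-nginx-rc --update-period=1s \
  --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent
# a same-image update still replaces the controller: a new rc is created, scaled up, then renamed back
kubectl get pods -l run=e2e-test-nginx-rc -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
kubectl delete rc e2e-test-nginx-rc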
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:14:04.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Oct 22 20:14:13.093: INFO: 0 pods remaining
Oct 22 20:14:13.093: INFO: 0 pods has nil DeletionTimestamp
Oct 22 20:14:13.093: INFO: 
Oct 22 20:14:14.542: INFO: 0 pods remaining
Oct 22 20:14:14.542: INFO: 0 pods has nil DeletionTimestamp
Oct 22 20:14:14.542: INFO: 
STEP: Gathering metrics
W1022 20:14:15.626404       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Oct 22 20:14:15.626: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:14:15.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-684" for this suite.
Oct 22 20:14:21.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:14:21.744: INFO: namespace gc-684 deletion completed in 6.113812964s

• [SLOW TEST:16.810 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
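The deleteOptions referred to in the spec name is foreground cascading deletion: the owner keeps existing (with a deletionTimestamp and the foregroundDeletion finalizer) until the garbage collector has removed its pods. A sketch of the same request against the REST API, assuming kubectl proxy is running locally; the rc name and namespace are placeholders:

kubectl proxy --port=8001 &
curl -X DELETE http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/simpletest-rc \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
# while its pods terminate, the rc is still visible with a deletionTimestamp set
kubectl get rc simpletest-rc -o jsonpath='{.metadata.deletionTimestamp}'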
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:14:21.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Oct 22 20:14:21.864: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:14:21.880: INFO: Number of nodes with available pods: 0
Oct 22 20:14:21.880: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:14:22.885: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:14:22.889: INFO: Number of nodes with available pods: 0
Oct 22 20:14:22.889: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:14:23.885: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:14:23.889: INFO: Number of nodes with available pods: 0
Oct 22 20:14:23.889: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:14:24.936: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:14:24.939: INFO: Number of nodes with available pods: 0
Oct 22 20:14:24.939: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:14:25.886: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:14:25.890: INFO: Number of nodes with available pods: 2
Oct 22 20:14:25.890: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Oct 22 20:14:25.911: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:14:25.913: INFO: Number of nodes with available pods: 1
Oct 22 20:14:25.913: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:14:26.918: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:14:26.921: INFO: Number of nodes with available pods: 1
Oct 22 20:14:26.921: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:14:27.918: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:14:27.921: INFO: Number of nodes with available pods: 1
Oct 22 20:14:27.921: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:14:28.917: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:14:28.920: INFO: Number of nodes with available pods: 1
Oct 22 20:14:28.920: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:14:29.933: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:14:30.051: INFO: Number of nodes with available pods: 1
Oct 22 20:14:30.051: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:14:30.920: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:14:30.924: INFO: Number of nodes with available pods: 1
Oct 22 20:14:30.924: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:14:31.919: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:14:31.921: INFO: Number of nodes with available pods: 1
Oct 22 20:14:31.921: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:14:32.937: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 22 20:14:32.940: INFO: Number of nodes with available pods: 2
Oct 22 20:14:32.940: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5414, will wait for the garbage collector to delete the pods
Oct 22 20:14:33.011: INFO: Deleting DaemonSet.extensions daemon-set took: 16.031509ms
Oct 22 20:14:33.112: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.254847ms
Oct 22 20:14:45.733: INFO: Number of nodes with available pods: 0
Oct 22 20:14:45.733: INFO: Number of running nodes: 0, number of available pods: 0
Oct 22 20:14:45.735: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5414/daemonsets","resourceVersion":"5323451"},"items":null}

Oct 22 20:14:45.738: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5414/pods","resourceVersion":"5323451"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:14:45.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5414" for this suite.
Oct 22 20:14:51.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:14:51.838: INFO: namespace daemonsets-5414 deletion completed in 6.087982324s

• [SLOW TEST:30.094 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
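A sketch of the DaemonSet shape this spec creates and tears down (name and image are illustrative). The control-plane node is skipped because the pod template carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint logged above:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo
spec:
  selector:
    matchLabels:
      app: daemon-set-demo
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# one pod per schedulable worker; deleting one of them gets it revived by the controller
kubectl get pods -l app=daemon-set-demo -o wide
kubectl delete ds daemon-set-demo --wait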
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:14:51.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-b4bf3d03-5de2-4cf9-b1ad-b9af2ffd766b
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-b4bf3d03-5de2-4cf9-b1ad-b9af2ffd766b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:14:58.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8964" for this suite.
Oct 22 20:15:20.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:15:20.125: INFO: namespace projected-8964 deletion completed in 22.095983595s

• [SLOW TEST:28.287 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
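A sketch of the update path this spec checks, assuming a reachable cluster; the configMap, pod, and key names are illustrative. The kubelet re-projects the volume after the API object changes, so the mounted file follows the update:

kubectl create configmap cm-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-watch
spec:
  containers:
  - name: client
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: cm-demo
EOF
kubectl patch configmap cm-demo -p '{"data":{"data-1":"value-2"}}'
kubectl logs cm-watch --tail=2    # shows value-2 after the kubelet sync interval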
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:15:20.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-dde35e40-576c-4ca3-b11b-92ddd77c945b
STEP: Creating a pod to test consume configMaps
Oct 22 20:15:20.207: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ad8bde54-9b66-47f2-9e36-b7b264b7fa9a" in namespace "projected-5661" to be "success or failure"
Oct 22 20:15:20.209: INFO: Pod "pod-projected-configmaps-ad8bde54-9b66-47f2-9e36-b7b264b7fa9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042192ms
Oct 22 20:15:22.213: INFO: Pod "pod-projected-configmaps-ad8bde54-9b66-47f2-9e36-b7b264b7fa9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006296718s
Oct 22 20:15:24.217: INFO: Pod "pod-projected-configmaps-ad8bde54-9b66-47f2-9e36-b7b264b7fa9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010542252s
STEP: Saw pod success
Oct 22 20:15:24.217: INFO: Pod "pod-projected-configmaps-ad8bde54-9b66-47f2-9e36-b7b264b7fa9a" satisfied condition "success or failure"
Oct 22 20:15:24.221: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-ad8bde54-9b66-47f2-9e36-b7b264b7fa9a container projected-configmap-volume-test: 
STEP: delete the pod
Oct 22 20:15:24.329: INFO: Waiting for pod pod-projected-configmaps-ad8bde54-9b66-47f2-9e36-b7b264b7fa9a to disappear
Oct 22 20:15:24.352: INFO: Pod pod-projected-configmaps-ad8bde54-9b66-47f2-9e36-b7b264b7fa9a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:15:24.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5661" for this suite.
Oct 22 20:15:30.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:15:30.467: INFO: namespace projected-5661 deletion completed in 6.110934793s

• [SLOW TEST:10.341 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
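The consume-once variant above differs only in that the pod runs to completion and its output is checked, mirroring the "success or failure" wait in the log. A short sketch with illustrative names:

kubectl create configmap cm-once --from-literal=key=value
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-reader
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/cm/key"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: cm-once
EOF
kubectl logs cm-reader    # expect: value, once the pod has Succeeded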
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:15:30.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Oct 22 20:15:35.090: INFO: Successfully updated pod "labelsupdate276c8523-ee41-46b5-8c7d-d9160ea2359c"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:15:37.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1169" for this suite.
Oct 22 20:15:57.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:15:57.246: INFO: namespace downward-api-1169 deletion completed in 20.128099227s

• [SLOW TEST:26.779 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
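This is the same label-propagation check as the projected downwardAPI spec earlier in this run, but through a plain downwardAPI volume source; only the volume stanza differs. A quick way to compare the two shapes (the pod name in the last line is hypothetical):

kubectl explain pod.spec.volumes.downwardAPI.items.fieldRef
kubectl explain pod.spec.volumes.projected.sources.downwardAPI.items.fieldRef
kubectl label pod labelsupdate-demo purpose=updated --overwrite    # the mounted labels file follows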
------------------------------
SSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:15:57.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Oct 22 20:15:57.326: INFO: Pod name pod-release: Found 0 pods out of 1
Oct 22 20:16:02.330: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:16:03.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7333" for this suite.
Oct 22 20:16:09.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:16:09.721: INFO: namespace replication-controller-7333 deletion completed in 6.3648581s

• [SLOW TEST:12.474 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
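"Released" above means that once a pod's labels stop matching the rc selector, the controller drops its ownerReference and starts a replacement to restore the replica count. A sketch with illustrative names, using the same run/v1 generator as the kubectl specs earlier in this run:

kubectl run pod-release --generator=run/v1 --image=docker.io/library/nginx:1.14-alpine
POD=$(kubectl get pods -l run=pod-release -o jsonpath='{.items[0].metadata.name}')
kubectl label pod "$POD" run=released --overwrite                  # no longer matches the rc selector
kubectl get pod "$POD" -o jsonpath='{.metadata.ownerReferences}'   # expect empty once released
kubectl get pods -l run=pod-release                                # a fresh replacement pod appears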
------------------------------
SSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:16:09.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Oct 22 20:16:09.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:16:15.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2288" for this suite.
Oct 22 20:16:55.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:16:56.052: INFO: namespace pods-2288 deletion completed in 40.109690287s

• [SLOW TEST:46.331 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
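The endpoint exercised here is the pod log subresource; the test negotiates it over a websocket, while kubectl and the API proxy read the same data over plain HTTP streaming. Sketch, with a placeholder pod name:

kubectl logs pod-logs-demo --follow
kubectl proxy --port=8001 &
curl 'http://127.0.0.1:8001/api/v1/namespaces/default/pods/pod-logs-demo/log?follow=true'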
------------------------------
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:16:56.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-4f046468-d097-4dd6-8670-7165b6f175a8
STEP: Creating a pod to test consume secrets
Oct 22 20:16:56.130: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-aeca8dc1-3cb4-4db4-a558-4c07f54bd0ca" in namespace "projected-6875" to be "success or failure"
Oct 22 20:16:56.133: INFO: Pod "pod-projected-secrets-aeca8dc1-3cb4-4db4-a558-4c07f54bd0ca": Phase="Pending", Reason="", readiness=false. Elapsed: 3.565481ms
Oct 22 20:16:58.138: INFO: Pod "pod-projected-secrets-aeca8dc1-3cb4-4db4-a558-4c07f54bd0ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00786581s
Oct 22 20:17:00.142: INFO: Pod "pod-projected-secrets-aeca8dc1-3cb4-4db4-a558-4c07f54bd0ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012323451s
STEP: Saw pod success
Oct 22 20:17:00.142: INFO: Pod "pod-projected-secrets-aeca8dc1-3cb4-4db4-a558-4c07f54bd0ca" satisfied condition "success or failure"
Oct 22 20:17:00.145: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-aeca8dc1-3cb4-4db4-a558-4c07f54bd0ca container projected-secret-volume-test: 
STEP: delete the pod
Oct 22 20:17:00.250: INFO: Waiting for pod pod-projected-secrets-aeca8dc1-3cb4-4db4-a558-4c07f54bd0ca to disappear
Oct 22 20:17:00.364: INFO: Pod pod-projected-secrets-aeca8dc1-3cb4-4db4-a558-4c07f54bd0ca no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:17:00.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6875" for this suite.
Oct 22 20:17:06.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:17:06.445: INFO: namespace projected-6875 deletion completed in 6.076941036s

• [SLOW TEST:10.393 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
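"Mappings and Item Mode" refers to the items list of a secret projection, which maps a key to a chosen path and sets a per-file mode. Sketch with illustrative names, key, and mode:

kubectl create secret generic secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapped
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret/ && cat /etc/secret/new-path-data-1"]
    volumeMounts:
    - name: secret
      mountPath: /etc/secret
  volumes:
  - name: secret
    projected:
      sources:
      - secret:
          name: secret-demo
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400
EOF
kubectl logs secret-mapped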
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:17:06.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Oct 22 20:17:06.531: INFO: Creating deployment "test-recreate-deployment"
Oct 22 20:17:06.535: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Oct 22 20:17:06.565: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Oct 22 20:17:08.644: INFO: Waiting deployment "test-recreate-deployment" to complete
Oct 22 20:17:08.647: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738994626, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738994626, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738994626, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738994626, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 22 20:17:10.650: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Oct 22 20:17:10.655: INFO: Updating deployment test-recreate-deployment
Oct 22 20:17:10.655: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Oct 22 20:17:10.945: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-9710,SelfLink:/apis/apps/v1/namespaces/deployment-9710/deployments/test-recreate-deployment,UID:e8ddfe3c-03a3-4998-a6cf-bade805b9568,ResourceVersion:5323976,Generation:2,CreationTimestamp:2020-10-22 20:17:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-10-22 20:17:10 +0000 UTC 2020-10-22 20:17:10 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-10-22 20:17:10 +0000 UTC 2020-10-22 20:17:06 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Oct 22 20:17:11.073: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-9710,SelfLink:/apis/apps/v1/namespaces/deployment-9710/replicasets/test-recreate-deployment-5c8c9cc69d,UID:da63a4d5-a010-44bf-9de3-db18006a5a92,ResourceVersion:5323975,Generation:1,CreationTimestamp:2020-10-22 20:17:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e8ddfe3c-03a3-4998-a6cf-bade805b9568 0xc002e61e87 0xc002e61e88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Oct 22 20:17:11.073: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Oct 22 20:17:11.073: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-9710,SelfLink:/apis/apps/v1/namespaces/deployment-9710/replicasets/test-recreate-deployment-6df85df6b9,UID:3798bff5-2e77-48c3-9a12-b67710818e43,ResourceVersion:5323965,Generation:2,CreationTimestamp:2020-10-22 20:17:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e8ddfe3c-03a3-4998-a6cf-bade805b9568 0xc000344137 0xc000344138}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Oct 22 20:17:11.077: INFO: Pod "test-recreate-deployment-5c8c9cc69d-rmzh9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-rmzh9,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-9710,SelfLink:/api/v1/namespaces/deployment-9710/pods/test-recreate-deployment-5c8c9cc69d-rmzh9,UID:04b02764-99bd-4dbe-9890-cdc65e03d52c,ResourceVersion:5323977,Generation:0,CreationTimestamp:2020-10-22 20:17:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d da63a4d5-a010-44bf-9de3-db18006a5a92 0xc0004335c7 0xc0004335c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hf22j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hf22j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hf22j true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000433730} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000433760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:17:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:17:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:17:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:17:10 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-10-22 20:17:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:17:11.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9710" for this suite.
Oct 22 20:17:17.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:17:17.202: INFO: namespace deployment-9710 deletion completed in 6.12022706s

• [SLOW TEST:10.756 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
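The Recreate strategy scales the old ReplicaSet to zero before bringing the new one up, which is why the dump above shows the redis ReplicaSet at Replicas:*0 while the new nginx pod is still Pending. A sketch of the same rollout by hand (the deployment name is illustrative; the images are the ones the test uses):

kubectl create deployment test-recreate --image=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl patch deployment test-recreate -p '{"spec":{"strategy":{"type":"Recreate","rollingUpdate":null}}}'
kubectl set image deployment/test-recreate redis=docker.io/library/nginx:1.14-alpine
kubectl rollout status deployment/test-recreate
kubectl get rs -l app=test-recreate    # the old rs reaches 0 before the new one scales up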
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:17:17.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Oct 22 20:17:17.385: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:17:17.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3869" for this suite.
Oct 22 20:17:23.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:17:23.630: INFO: namespace kubectl-3869 deletion completed in 6.136742791s

• [SLOW TEST:6.428 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
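-p 0 (--port 0) tells the proxy to bind an ephemeral port and print it, which is what the test parses before curling /api/. By hand (the port below is a placeholder for whatever the proxy reports):

kubectl proxy -p 0 &
# prints something like: Starting to serve on 127.0.0.1:34567
curl http://127.0.0.1:34567/api/    # expect a JSON APIVersions object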
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:17:23.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-acb58b3a-5d8d-4e60-994c-ce39917c0b7b
STEP: Creating a pod to test consume secrets
Oct 22 20:17:23.826: INFO: Waiting up to 5m0s for pod "pod-secrets-b549fe30-bed2-4e43-8a37-8ac24af03212" in namespace "secrets-2718" to be "success or failure"
Oct 22 20:17:23.835: INFO: Pod "pod-secrets-b549fe30-bed2-4e43-8a37-8ac24af03212": Phase="Pending", Reason="", readiness=false. Elapsed: 9.537444ms
Oct 22 20:17:25.839: INFO: Pod "pod-secrets-b549fe30-bed2-4e43-8a37-8ac24af03212": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013158656s
Oct 22 20:17:27.843: INFO: Pod "pod-secrets-b549fe30-bed2-4e43-8a37-8ac24af03212": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016896121s
STEP: Saw pod success
Oct 22 20:17:27.843: INFO: Pod "pod-secrets-b549fe30-bed2-4e43-8a37-8ac24af03212" satisfied condition "success or failure"
Oct 22 20:17:27.846: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-b549fe30-bed2-4e43-8a37-8ac24af03212 container secret-volume-test: 
STEP: delete the pod
Oct 22 20:17:27.860: INFO: Waiting for pod pod-secrets-b549fe30-bed2-4e43-8a37-8ac24af03212 to disappear
Oct 22 20:17:27.879: INFO: Pod pod-secrets-b549fe30-bed2-4e43-8a37-8ac24af03212 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:17:27.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2718" for this suite.
Oct 22 20:17:33.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:17:33.983: INFO: namespace secrets-2718 deletion completed in 6.101184899s
STEP: Destroying namespace "secret-namespace-9811" for this suite.
Oct 22 20:17:39.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:17:40.069: INFO: namespace secret-namespace-9811 deletion completed in 6.085270729s

• [SLOW TEST:16.438 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
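The second namespace torn down above exists because the test creates identically named secrets in two namespaces and mounts only the one local to the pod; secret references never cross namespaces. Sketch with illustrative names:

kubectl create namespace ns-a
kubectl create namespace ns-b
kubectl -n ns-a create secret generic secret-test --from-literal=data-1=from-ns-a
kubectl -n ns-b create secret generic secret-test --from-literal=data-1=from-ns-b
kubectl -n ns-a apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-reader
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/secret/data-1"]
    volumeMounts:
    - name: secret
      mountPath: /etc/secret
  volumes:
  - name: secret
    secret:
      secretName: secret-test
EOF
kubectl -n ns-a logs secret-reader    # expect: from-ns-a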
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:17:40.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Oct 22 20:17:40.138: INFO: Waiting up to 5m0s for pod "pod-1c9de28d-ae3b-4cbe-ad6b-ee4d3a23b6dc" in namespace "emptydir-419" to be "success or failure"
Oct 22 20:17:40.154: INFO: Pod "pod-1c9de28d-ae3b-4cbe-ad6b-ee4d3a23b6dc": Phase="Pending", Reason="", readiness=false. Elapsed: 15.677325ms
Oct 22 20:17:42.158: INFO: Pod "pod-1c9de28d-ae3b-4cbe-ad6b-ee4d3a23b6dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020312794s
Oct 22 20:17:44.163: INFO: Pod "pod-1c9de28d-ae3b-4cbe-ad6b-ee4d3a23b6dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025131198s
STEP: Saw pod success
Oct 22 20:17:44.163: INFO: Pod "pod-1c9de28d-ae3b-4cbe-ad6b-ee4d3a23b6dc" satisfied condition "success or failure"
Oct 22 20:17:44.166: INFO: Trying to get logs from node iruya-worker pod pod-1c9de28d-ae3b-4cbe-ad6b-ee4d3a23b6dc container test-container: 
STEP: delete the pod
Oct 22 20:17:44.192: INFO: Waiting for pod pod-1c9de28d-ae3b-4cbe-ad6b-ee4d3a23b6dc to disappear
Oct 22 20:17:44.208: INFO: Pod pod-1c9de28d-ae3b-4cbe-ad6b-ee4d3a23b6dc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:17:44.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-419" for this suite.
Oct 22 20:17:50.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:17:50.383: INFO: namespace emptydir-419 deletion completed in 6.094706271s

• [SLOW TEST:10.314 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
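
The EmptyDir test above runs a pod as a non-root user, writes a 0777 file into an emptyDir on the default medium and verifies the observed mode. A hedged sketch of a comparable pod spec (the UID, image and shell command are assumptions; the real test uses a dedicated mounttest image):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRootUID := int64(1000) // assumed non-root UID
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-nonroot-0777-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUID},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Medium "" (default) places the volume on node-local storage.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // illustrative image
				// Create a 0777 file as the non-root user and echo back the
				// observed mode, roughly what the mounttest image does.
				Command:      []string{"sh", "-c", "touch /mnt/scratch/f && chmod 0777 /mnt/scratch/f && stat -c %a /mnt/scratch/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt/scratch"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
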
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:17:50.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Oct 22 20:17:50.478: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Oct 22 20:17:50.483: INFO: Number of nodes with available pods: 0
Oct 22 20:17:50.483: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Oct 22 20:17:50.543: INFO: Number of nodes with available pods: 0
Oct 22 20:17:50.543: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:17:51.547: INFO: Number of nodes with available pods: 0
Oct 22 20:17:51.547: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:17:52.548: INFO: Number of nodes with available pods: 0
Oct 22 20:17:52.548: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:17:53.547: INFO: Number of nodes with available pods: 1
Oct 22 20:17:53.547: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Oct 22 20:17:53.591: INFO: Number of nodes with available pods: 1
Oct 22 20:17:53.591: INFO: Number of running nodes: 0, number of available pods: 1
Oct 22 20:17:54.595: INFO: Number of nodes with available pods: 0
Oct 22 20:17:54.595: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Oct 22 20:17:54.619: INFO: Number of nodes with available pods: 0
Oct 22 20:17:54.619: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:17:55.623: INFO: Number of nodes with available pods: 0
Oct 22 20:17:55.623: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:17:56.623: INFO: Number of nodes with available pods: 0
Oct 22 20:17:56.623: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:17:57.624: INFO: Number of nodes with available pods: 0
Oct 22 20:17:57.624: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:17:58.623: INFO: Number of nodes with available pods: 0
Oct 22 20:17:58.623: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:17:59.624: INFO: Number of nodes with available pods: 0
Oct 22 20:17:59.624: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:18:00.623: INFO: Number of nodes with available pods: 0
Oct 22 20:18:00.623: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:18:01.623: INFO: Number of nodes with available pods: 0
Oct 22 20:18:01.623: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:18:02.623: INFO: Number of nodes with available pods: 0
Oct 22 20:18:02.623: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:18:03.624: INFO: Number of nodes with available pods: 0
Oct 22 20:18:03.624: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:18:04.624: INFO: Number of nodes with available pods: 0
Oct 22 20:18:04.624: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:18:05.623: INFO: Number of nodes with available pods: 0
Oct 22 20:18:05.623: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:18:06.634: INFO: Number of nodes with available pods: 0
Oct 22 20:18:06.634: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:18:07.623: INFO: Number of nodes with available pods: 0
Oct 22 20:18:07.623: INFO: Node iruya-worker is running more than one daemon pod
Oct 22 20:18:08.624: INFO: Number of nodes with available pods: 1
Oct 22 20:18:08.624: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5673, will wait for the garbage collector to delete the pods
Oct 22 20:18:08.694: INFO: Deleting DaemonSet.extensions daemon-set took: 11.433882ms
Oct 22 20:18:08.994: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.262944ms
Oct 22 20:18:15.413: INFO: Number of nodes with available pods: 0
Oct 22 20:18:15.413: INFO: Number of running nodes: 0, number of available pods: 0
Oct 22 20:18:15.415: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5673/daemonsets","resourceVersion":"5324243"},"items":null}

Oct 22 20:18:15.416: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5673/pods","resourceVersion":"5324243"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:18:15.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5673" for this suite.
Oct 22 20:18:21.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:18:21.542: INFO: namespace daemonsets-5673 deletion completed in 6.094871498s

• [SLOW TEST:31.158 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
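
The "complex daemon" test drives scheduling purely through labels: the DaemonSet carries a node selector, so relabelling a node from blue to green is what launches or evicts its pod, and the update strategy is switched to RollingUpdate partway through, as logged above. A sketch of such a DaemonSet with the k8s.io/api/apps/v1 types (the selector label, node label key/value and image are illustrative, not taken from this run):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // assumed selector label
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Only nodes labelled color=green run a daemon pod, so relabelling
					// a node from blue to green schedules or evicts the pod.
					NodeSelector: map[string]string{"color": "green"}, // illustrative key/value
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine", // illustrative image
					}},
				},
			},
			// The test later flips the strategy to RollingUpdate, as logged above.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{Type: appsv1.RollingUpdateDaemonSetStrategyType},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}
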
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:18:21.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Oct 22 20:18:21.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-2691'
Oct 22 20:18:21.723: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Oct 22 20:18:21.723: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Oct 22 20:18:23.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-2691'
Oct 22 20:18:24.072: INFO: stderr: ""
Oct 22 20:18:24.072: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:18:24.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2691" for this suite.
Oct 22 20:18:46.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:18:46.799: INFO: namespace kubectl-2691 deletion completed in 22.720211092s

• [SLOW TEST:25.257 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
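
As the stderr above notes, `kubectl run --generator=deployment/apps.v1` is deprecated; the same object can be created with `kubectl create deployment` or built directly. A sketch of the Deployment the command produces (the `run` label key is an assumption about the generator's output, not something shown in this log):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"run": "e2e-test-nginx-deployment"} // assumed label key
	dep := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-deployment", Namespace: "kubectl-2691"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "e2e-test-nginx-deployment",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(dep, "", "  ")
	fmt.Println(string(out))
}
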
SSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:18:46.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Oct 22 20:18:50.929: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-d6adaa9e-a1e5-4a65-aa85-68a5a69dd861,GenerateName:,Namespace:events-8838,SelfLink:/api/v1/namespaces/events-8838/pods/send-events-d6adaa9e-a1e5-4a65-aa85-68a5a69dd861,UID:3e8fdfef-8b85-4de5-ba9e-a5b799ff402a,ResourceVersion:5324373,Generation:0,CreationTimestamp:2020-10-22 20:18:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 882752393,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-fbgj4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fbgj4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-fbgj4 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000558230} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000558290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:18:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:18:50 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:18:50 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:18:46 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.44,StartTime:2020-10-22 20:18:46 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-10-22 20:18:50 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://77fd4fa3c73d532a18c99757a16db2db12937b13641c68bf2c296b7429cffad1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Oct 22 20:18:52.934: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Oct 22 20:18:54.939: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:18:54.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-8838" for this suite.
Oct 22 20:19:36.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:19:37.058: INFO: namespace events-8838 deletion completed in 42.09263804s

• [SLOW TEST:50.259 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
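
The Events test waits for one event sourced from the scheduler and one from the kubelet for the pod dumped above. A sketch of the field selectors such a check can use, built with the k8s.io/apimachinery fields package (the exact selector fields are an assumption; the pod name, namespace and UID are the ones from the log):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/fields"
)

func main() {
	// Identity of the pod dumped in the log above.
	podName := "send-events-d6adaa9e-a1e5-4a65-aa85-68a5a69dd861"
	podNamespace := "events-8838"
	podUID := "3e8fdfef-8b85-4de5-ba9e-a5b799ff402a"

	// One event is expected from the scheduler...
	schedulerEvents := fields.Set{
		"involvedObject.kind":      "Pod",
		"involvedObject.name":      podName,
		"involvedObject.namespace": podNamespace,
		"source":                   "default-scheduler",
	}.AsSelector().String()

	// ...and one from the kubelet on the node that ran the pod.
	kubeletEvents := fields.Set{
		"involvedObject.kind":      "Pod",
		"involvedObject.uid":       podUID,
		"involvedObject.namespace": podNamespace,
		"source":                   "kubelet",
	}.AsSelector().String()

	// Either string can be fed to an Events list call or to
	// `kubectl get events --field-selector=<selector>`.
	fmt.Println(schedulerEvents)
	fmt.Println(kubeletEvents)
}
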
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:19:37.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-f86867c0-00d4-425a-9f83-f4f6ce3ddf97
STEP: Creating a pod to test consume secrets
Oct 22 20:19:37.141: INFO: Waiting up to 5m0s for pod "pod-secrets-99afbb9b-e596-41fe-9ae9-5a4a77c10f30" in namespace "secrets-3941" to be "success or failure"
Oct 22 20:19:37.145: INFO: Pod "pod-secrets-99afbb9b-e596-41fe-9ae9-5a4a77c10f30": Phase="Pending", Reason="", readiness=false. Elapsed: 3.980725ms
Oct 22 20:19:39.150: INFO: Pod "pod-secrets-99afbb9b-e596-41fe-9ae9-5a4a77c10f30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008345665s
Oct 22 20:19:41.153: INFO: Pod "pod-secrets-99afbb9b-e596-41fe-9ae9-5a4a77c10f30": Phase="Running", Reason="", readiness=true. Elapsed: 4.012050122s
Oct 22 20:19:43.156: INFO: Pod "pod-secrets-99afbb9b-e596-41fe-9ae9-5a4a77c10f30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015097696s
STEP: Saw pod success
Oct 22 20:19:43.156: INFO: Pod "pod-secrets-99afbb9b-e596-41fe-9ae9-5a4a77c10f30" satisfied condition "success or failure"
Oct 22 20:19:43.159: INFO: Trying to get logs from node iruya-worker pod pod-secrets-99afbb9b-e596-41fe-9ae9-5a4a77c10f30 container secret-env-test: 
STEP: delete the pod
Oct 22 20:19:43.181: INFO: Waiting for pod pod-secrets-99afbb9b-e596-41fe-9ae9-5a4a77c10f30 to disappear
Oct 22 20:19:43.192: INFO: Pod pod-secrets-99afbb9b-e596-41fe-9ae9-5a4a77c10f30 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:19:43.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3941" for this suite.
Oct 22 20:19:49.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:19:49.330: INFO: namespace secrets-3941 deletion completed in 6.135257719s

• [SLOW TEST:12.271 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
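
Here the secret is consumed through the environment rather than a volume: the container's env var references a secret key. A minimal sketch of that container spec (the secret name, key, env var name and image are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test-demo"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
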
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:19:49.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-9b4b5ba7-d660-49fb-bff0-ef624dce0ef7
STEP: Creating configMap with name cm-test-opt-upd-05d6678d-eaa9-4d55-81b1-c377fa0ae6ef
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-9b4b5ba7-d660-49fb-bff0-ef624dce0ef7
STEP: Updating configmap cm-test-opt-upd-05d6678d-eaa9-4d55-81b1-c377fa0ae6ef
STEP: Creating configMap with name cm-test-opt-create-7a0d4070-f798-46dc-b8b9-e93295b9fb1a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:21:03.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6304" for this suite.
Oct 22 20:21:25.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:21:26.022: INFO: namespace configmap-6304 deletion completed in 22.086270034s

• [SLOW TEST:96.692 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
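
The ConfigMap test relies on optional volume sources: the pod keeps running while one referenced ConfigMap is deleted, another is updated, and a third is only created after the pod starts, and the kubelet eventually projects the new contents into the volume. A sketch of one such optional ConfigMap volume (names, image and command are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-optional-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "cm-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create-demo"},
						// Optional lets the pod start before the ConfigMap exists;
						// the kubelet projects the data once it appears and keeps
						// the volume in sync with later updates.
						Optional: &optional,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "cm-volume-test",
				Image:        "busybox", // illustrative image
				Command:      []string{"sh", "-c", "while true; do cat /etc/cm-volume/data-1 2>/dev/null; sleep 2; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm-volume", MountPath: "/etc/cm-volume", ReadOnly: true}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
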
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:21:26.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-2ede51fb-e65f-41ba-a577-ae7a0481dd9e
STEP: Creating a pod to test consume secrets
Oct 22 20:21:26.113: INFO: Waiting up to 5m0s for pod "pod-secrets-ec1ccb72-280a-48c4-8119-3d4f2fa289b7" in namespace "secrets-8234" to be "success or failure"
Oct 22 20:21:26.130: INFO: Pod "pod-secrets-ec1ccb72-280a-48c4-8119-3d4f2fa289b7": Phase="Pending", Reason="", readiness=false. Elapsed: 17.10217ms
Oct 22 20:21:28.176: INFO: Pod "pod-secrets-ec1ccb72-280a-48c4-8119-3d4f2fa289b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063589857s
Oct 22 20:21:30.181: INFO: Pod "pod-secrets-ec1ccb72-280a-48c4-8119-3d4f2fa289b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068042282s
STEP: Saw pod success
Oct 22 20:21:30.181: INFO: Pod "pod-secrets-ec1ccb72-280a-48c4-8119-3d4f2fa289b7" satisfied condition "success or failure"
Oct 22 20:21:30.184: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-ec1ccb72-280a-48c4-8119-3d4f2fa289b7 container secret-volume-test: 
STEP: delete the pod
Oct 22 20:21:30.215: INFO: Waiting for pod pod-secrets-ec1ccb72-280a-48c4-8119-3d4f2fa289b7 to disappear
Oct 22 20:21:30.219: INFO: Pod pod-secrets-ec1ccb72-280a-48c4-8119-3d4f2fa289b7 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:21:30.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8234" for this suite.
Oct 22 20:21:36.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:21:36.333: INFO: namespace secrets-8234 deletion completed in 6.110760853s

• [SLOW TEST:10.310 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
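
This variant mounts the secret as a non-root user with an fsGroup and a restrictive defaultMode, then checks the ownership and permissions of the projected files. A sketch with assumed values for UID, fsGroup and mode (the real test's numbers may differ):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	runAsUser := int64(1000)   // assumed non-root UID
	fsGroup := int64(1001)     // assumed supplemental group
	defaultMode := int32(0440) // assumed file mode for the projected keys
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-nonroot-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &runAsUser,
				FSGroup:   &fsGroup, // projected files become group-owned by this GID
			},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test-demo",
						DefaultMode: &defaultMode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox", // illustrative image
				Command:      []string{"sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
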
SS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:21:36.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Oct 22 20:21:36.396: INFO: Waiting up to 5m0s for pod "pod-db54b1b0-923d-46c5-82c6-803f99049836" in namespace "emptydir-5154" to be "success or failure"
Oct 22 20:21:36.399: INFO: Pod "pod-db54b1b0-923d-46c5-82c6-803f99049836": Phase="Pending", Reason="", readiness=false. Elapsed: 3.402461ms
Oct 22 20:21:38.405: INFO: Pod "pod-db54b1b0-923d-46c5-82c6-803f99049836": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009228264s
Oct 22 20:21:40.409: INFO: Pod "pod-db54b1b0-923d-46c5-82c6-803f99049836": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013529265s
STEP: Saw pod success
Oct 22 20:21:40.409: INFO: Pod "pod-db54b1b0-923d-46c5-82c6-803f99049836" satisfied condition "success or failure"
Oct 22 20:21:40.412: INFO: Trying to get logs from node iruya-worker2 pod pod-db54b1b0-923d-46c5-82c6-803f99049836 container test-container: 
STEP: delete the pod
Oct 22 20:21:40.430: INFO: Waiting for pod pod-db54b1b0-923d-46c5-82c6-803f99049836 to disappear
Oct 22 20:21:40.434: INFO: Pod pod-db54b1b0-923d-46c5-82c6-803f99049836 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:21:40.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5154" for this suite.
Oct 22 20:21:46.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:21:46.526: INFO: namespace emptydir-5154 deletion completed in 6.088135099s

• [SLOW TEST:10.193 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
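
Setting the emptyDir medium to Memory backs the volume with tmpfs on the node, which is what the mode check above inspects. A minimal sketch (image and command are illustrative; the real test uses its mounttest image):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the volume with tmpfs on the node.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // illustrative image
				// Report the filesystem type and mode of the mount point.
				Command:      []string{"sh", "-c", "mount | grep /mnt/scratch; stat -c %a /mnt/scratch"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt/scratch"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
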
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:21:46.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-eb6853dc-2686-41d4-a6df-11f9f718f768
STEP: Creating a pod to test consume secrets
Oct 22 20:21:46.608: INFO: Waiting up to 5m0s for pod "pod-secrets-03a96137-a4ee-4627-9bde-24f1e4848bc8" in namespace "secrets-2087" to be "success or failure"
Oct 22 20:21:46.611: INFO: Pod "pod-secrets-03a96137-a4ee-4627-9bde-24f1e4848bc8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.118745ms
Oct 22 20:21:48.655: INFO: Pod "pod-secrets-03a96137-a4ee-4627-9bde-24f1e4848bc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047814333s
Oct 22 20:21:50.659: INFO: Pod "pod-secrets-03a96137-a4ee-4627-9bde-24f1e4848bc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051647031s
STEP: Saw pod success
Oct 22 20:21:50.659: INFO: Pod "pod-secrets-03a96137-a4ee-4627-9bde-24f1e4848bc8" satisfied condition "success or failure"
Oct 22 20:21:50.662: INFO: Trying to get logs from node iruya-worker pod pod-secrets-03a96137-a4ee-4627-9bde-24f1e4848bc8 container secret-volume-test: 
STEP: delete the pod
Oct 22 20:21:50.682: INFO: Waiting for pod pod-secrets-03a96137-a4ee-4627-9bde-24f1e4848bc8 to disappear
Oct 22 20:21:50.687: INFO: Pod pod-secrets-03a96137-a4ee-4627-9bde-24f1e4848bc8 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:21:50.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2087" for this suite.
Oct 22 20:21:56.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:21:56.807: INFO: namespace secrets-2087 deletion completed in 6.117417006s

• [SLOW TEST:10.281 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
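
"With mappings" means the secret volume uses items to remap a key to a chosen path instead of projecting every key under its own name. A sketch of that volume source (secret name, key, path and image are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-mappings-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map-demo",
						// Items remaps individual keys to chosen paths.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox", // illustrative image
				Command:      []string{"cat", "/etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
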
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:21:56.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct 22 20:21:56.893: INFO: Waiting up to 5m0s for pod "pod-81a5a871-07f9-4fc3-bf8d-07d90b648de2" in namespace "emptydir-3011" to be "success or failure"
Oct 22 20:21:56.937: INFO: Pod "pod-81a5a871-07f9-4fc3-bf8d-07d90b648de2": Phase="Pending", Reason="", readiness=false. Elapsed: 44.577278ms
Oct 22 20:21:58.942: INFO: Pod "pod-81a5a871-07f9-4fc3-bf8d-07d90b648de2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049235157s
Oct 22 20:22:00.946: INFO: Pod "pod-81a5a871-07f9-4fc3-bf8d-07d90b648de2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053395789s
STEP: Saw pod success
Oct 22 20:22:00.946: INFO: Pod "pod-81a5a871-07f9-4fc3-bf8d-07d90b648de2" satisfied condition "success or failure"
Oct 22 20:22:00.949: INFO: Trying to get logs from node iruya-worker pod pod-81a5a871-07f9-4fc3-bf8d-07d90b648de2 container test-container: 
STEP: delete the pod
Oct 22 20:22:00.970: INFO: Waiting for pod pod-81a5a871-07f9-4fc3-bf8d-07d90b648de2 to disappear
Oct 22 20:22:00.974: INFO: Pod pod-81a5a871-07f9-4fc3-bf8d-07d90b648de2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:22:00.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3011" for this suite.
Oct 22 20:22:06.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:22:07.070: INFO: namespace emptydir-3011 deletion completed in 6.092754158s

• [SLOW TEST:10.263 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:22:07.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Oct 22 20:22:07.162: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ff35950d-73a0-45c4-b79e-2fe90769265c" in namespace "projected-9233" to be "success or failure"
Oct 22 20:22:07.178: INFO: Pod "downwardapi-volume-ff35950d-73a0-45c4-b79e-2fe90769265c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.784222ms
Oct 22 20:22:09.182: INFO: Pod "downwardapi-volume-ff35950d-73a0-45c4-b79e-2fe90769265c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020262161s
Oct 22 20:22:11.187: INFO: Pod "downwardapi-volume-ff35950d-73a0-45c4-b79e-2fe90769265c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024789656s
STEP: Saw pod success
Oct 22 20:22:11.187: INFO: Pod "downwardapi-volume-ff35950d-73a0-45c4-b79e-2fe90769265c" satisfied condition "success or failure"
Oct 22 20:22:11.189: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-ff35950d-73a0-45c4-b79e-2fe90769265c container client-container: 
STEP: delete the pod
Oct 22 20:22:11.223: INFO: Waiting for pod downwardapi-volume-ff35950d-73a0-45c4-b79e-2fe90769265c to disappear
Oct 22 20:22:11.231: INFO: Pod downwardapi-volume-ff35950d-73a0-45c4-b79e-2fe90769265c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:22:11.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9233" for this suite.
Oct 22 20:22:17.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:22:17.321: INFO: namespace projected-9233 deletion completed in 6.08709937s

• [SLOW TEST:10.250 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
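
The projected downward API test reads limits.memory through a resourceFieldRef while deliberately setting no memory limit on the container, so the kubelet falls back to the node's allocatable memory when populating the file. A sketch of such a projected volume (the divisor, file path, container name and image are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-memlimit-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
										Divisor:       resource.MustParse("1Mi"), // assumed divisor
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "busybox", // illustrative image
				// No memory limit is set on purpose, so the projected file is
				// expected to contain the node's allocatable memory instead.
				Command:      []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
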
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:22:17.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:22:21.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1946" for this suite.
Oct 22 20:22:27.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:22:27.534: INFO: namespace kubelet-test-1946 deletion completed in 6.096320032s

• [SLOW TEST:10.212 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
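
The Kubelet test schedules a container whose command always fails and then asserts that the container status carries a terminated state with a reason. A sketch of such a pod, with the status field the check inspects noted in a comment (names and image are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",               // illustrative image
				Command: []string{"/bin/false"}, // always exits with a non-zero code
			}},
		},
	}
	// Once the container has exited, the check amounts to inspecting
	// pod.Status.ContainerStatuses[0].State.Terminated and asserting a
	// non-empty Reason (typically "Error") plus start/finish timestamps.
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
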
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:22:27.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Oct 22 20:22:27.612: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8858bdb0-ca5a-4232-bc0c-16a7765b4b3f" in namespace "downward-api-3834" to be "success or failure"
Oct 22 20:22:27.615: INFO: Pod "downwardapi-volume-8858bdb0-ca5a-4232-bc0c-16a7765b4b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.50248ms
Oct 22 20:22:29.619: INFO: Pod "downwardapi-volume-8858bdb0-ca5a-4232-bc0c-16a7765b4b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007628645s
Oct 22 20:22:31.752: INFO: Pod "downwardapi-volume-8858bdb0-ca5a-4232-bc0c-16a7765b4b3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.140381853s
STEP: Saw pod success
Oct 22 20:22:31.752: INFO: Pod "downwardapi-volume-8858bdb0-ca5a-4232-bc0c-16a7765b4b3f" satisfied condition "success or failure"
Oct 22 20:22:31.755: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-8858bdb0-ca5a-4232-bc0c-16a7765b4b3f container client-container: 
STEP: delete the pod
Oct 22 20:22:31.891: INFO: Waiting for pod downwardapi-volume-8858bdb0-ca5a-4232-bc0c-16a7765b4b3f to disappear
Oct 22 20:22:31.909: INFO: Pod downwardapi-volume-8858bdb0-ca5a-4232-bc0c-16a7765b4b3f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:22:31.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3834" for this suite.
Oct 22 20:22:37.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:22:38.242: INFO: namespace downward-api-3834 deletion completed in 6.330345264s

• [SLOW TEST:10.708 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
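
Here the downward API volume itself carries a DefaultMode, so every projected file (pod name, labels, and so on) is created with that mode. A sketch with an assumed 0400 mode and a single metadata.name item (paths and image are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	defaultMode := int32(0400) // assumed mode; every projected file gets it
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-defaultmode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						DefaultMode: &defaultMode,
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox", // illustrative image
				Command:      []string{"sh", "-c", "stat -c %a /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
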
SSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:22:38.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4815.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4815.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 22 20:22:44.400: INFO: DNS probes using dns-4815/dns-test-8f79624d-daf4-4d72-9ab0-44e7150128df succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:22:44.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4815" for this suite.
Oct 22 20:22:50.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:22:50.636: INFO: namespace dns-4815 deletion completed in 6.198977692s

• [SLOW TEST:12.393 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
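
The dig loops above probe kubernetes.default.svc.cluster.local over UDP and TCP from two prober images. From Go inside a cluster pod, the same A record can be resolved with the standard resolver; a minimal sketch:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Resolve the same record the dig probes check, relying on the pod's
	// /etc/resolv.conf pointing at the cluster DNS service.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	addrs, err := net.DefaultResolver.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("kubernetes.default resolves to:", addrs)
}
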
SSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:22:50.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-83c430e7-1032-4f3e-bc7a-4fcc4731f53d
Oct 22 20:22:50.754: INFO: Pod name my-hostname-basic-83c430e7-1032-4f3e-bc7a-4fcc4731f53d: Found 0 pods out of 1
Oct 22 20:22:55.758: INFO: Pod name my-hostname-basic-83c430e7-1032-4f3e-bc7a-4fcc4731f53d: Found 1 pods out of 1
Oct 22 20:22:55.758: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-83c430e7-1032-4f3e-bc7a-4fcc4731f53d" are running
Oct 22 20:22:55.762: INFO: Pod "my-hostname-basic-83c430e7-1032-4f3e-bc7a-4fcc4731f53d-f9rc2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-22 20:22:50 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-22 20:22:53 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-22 20:22:53 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-22 20:22:50 +0000 UTC Reason: Message:}])
Oct 22 20:22:55.762: INFO: Trying to dial the pod
Oct 22 20:23:00.780: INFO: Controller my-hostname-basic-83c430e7-1032-4f3e-bc7a-4fcc4731f53d: Got expected result from replica 1 [my-hostname-basic-83c430e7-1032-4f3e-bc7a-4fcc4731f53d-f9rc2]: "my-hostname-basic-83c430e7-1032-4f3e-bc7a-4fcc4731f53d-f9rc2", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:23:00.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2751" for this suite.
Oct 22 20:23:06.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:23:06.907: INFO: namespace replication-controller-2751 deletion completed in 6.122600968s

• [SLOW TEST:16.271 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
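
The ReplicationController test creates one replica of the serve-hostname image (the image name appears in the pod dump earlier in this log) and dials each pod until it answers with its own name. A sketch of such an RC (the container port and the "name" label key are assumptions, not taken from this run):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "my-hostname-basic-demo"} // assumed label key
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic-demo"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "my-hostname-basic-demo",
						Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1",
						// serve-hostname answers HTTP requests with its pod name;
						// port 9376 is an assumption about the image's default.
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}
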
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:23:06.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:23:11.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3080" for this suite.
Oct 22 20:23:17.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:23:17.229: INFO: namespace emptydir-wrapper-3080 deletion completed in 6.091261361s

• [SLOW TEST:10.322 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
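
The wrapper-volumes test mounts a secret volume and a ConfigMap volume side by side in one pod (both are implemented on top of emptyDir wrappers in the kubelet) and checks that the mounts do not conflict, then cleans up the secret, configmap and pod as logged above. A rough sketch of such a pod (names, paths and image are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapper-volumes-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				{
					Name: "secret-volume",
					VolumeSource: corev1.VolumeSource{
						Secret: &corev1.SecretVolumeSource{SecretName: "wrapper-secret-demo"},
					},
				},
				{
					Name: "configmap-volume",
					VolumeSource: corev1.VolumeSource{
						ConfigMap: &corev1.ConfigMapVolumeSource{
							LocalObjectReference: corev1.LocalObjectReference{Name: "wrapper-configmap-demo"},
						},
					},
				},
			},
			Containers: []corev1.Container{{
				Name:    "wrapper-test",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "ls /etc/secret-volume /etc/configmap-volume && sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true},
					{Name: "configmap-volume", MountPath: "/etc/configmap-volume", ReadOnly: true},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
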
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:23:17.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b42c129d-51e6-412d-8ab3-3718b3c93473
STEP: Creating a pod to test consume secrets
Oct 22 20:23:17.309: INFO: Waiting up to 5m0s for pod "pod-secrets-6176c4f2-da13-4461-8591-aac8a0e9c787" in namespace "secrets-3371" to be "success or failure"
Oct 22 20:23:17.317: INFO: Pod "pod-secrets-6176c4f2-da13-4461-8591-aac8a0e9c787": Phase="Pending", Reason="", readiness=false. Elapsed: 7.972884ms
Oct 22 20:23:19.320: INFO: Pod "pod-secrets-6176c4f2-da13-4461-8591-aac8a0e9c787": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011660571s
Oct 22 20:23:21.324: INFO: Pod "pod-secrets-6176c4f2-da13-4461-8591-aac8a0e9c787": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015417743s
STEP: Saw pod success
Oct 22 20:23:21.324: INFO: Pod "pod-secrets-6176c4f2-da13-4461-8591-aac8a0e9c787" satisfied condition "success or failure"
Oct 22 20:23:21.327: INFO: Trying to get logs from node iruya-worker pod pod-secrets-6176c4f2-da13-4461-8591-aac8a0e9c787 container secret-volume-test: 
STEP: delete the pod
Oct 22 20:23:21.348: INFO: Waiting for pod pod-secrets-6176c4f2-da13-4461-8591-aac8a0e9c787 to disappear
Oct 22 20:23:21.353: INFO: Pod pod-secrets-6176c4f2-da13-4461-8591-aac8a0e9c787 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:23:21.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3371" for this suite.
Oct 22 20:23:27.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:23:27.436: INFO: namespace secrets-3371 deletion completed in 6.080550711s

• [SLOW TEST:10.207 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
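
Editor's sketch: the steps above amount to creating a Secret, mounting it as a pod volume with an explicit defaultMode, and waiting for the pod to reach Succeeded. As a rough illustration of the object shape involved (the names, image, 0644 mode and mount path below are illustrative assumptions, not values taken from this run), a minimal Go sketch using the Kubernetes API types might look like this:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical defaultMode; the conformance test generates its own names and mode.
	mode := int32(0644)
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test-demo",
						DefaultMode: &mode, // every file projected from the Secret gets this mode
					},
				},
			}},
		},
	}
	out, err := json.MarshalIndent(&pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

The suite then polls the pod phase until it reports Succeeded, which is what the Pending → Succeeded transitions above record.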
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:23:27.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Oct 22 20:23:27.512: INFO: Waiting up to 5m0s for pod "downwardapi-volume-85e863ea-4b4c-44c8-9d56-49afc194f227" in namespace "projected-7168" to be "success or failure"
Oct 22 20:23:27.521: INFO: Pod "downwardapi-volume-85e863ea-4b4c-44c8-9d56-49afc194f227": Phase="Pending", Reason="", readiness=false. Elapsed: 9.139756ms
Oct 22 20:23:29.526: INFO: Pod "downwardapi-volume-85e863ea-4b4c-44c8-9d56-49afc194f227": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013327804s
Oct 22 20:23:31.529: INFO: Pod "downwardapi-volume-85e863ea-4b4c-44c8-9d56-49afc194f227": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016878585s
STEP: Saw pod success
Oct 22 20:23:31.529: INFO: Pod "downwardapi-volume-85e863ea-4b4c-44c8-9d56-49afc194f227" satisfied condition "success or failure"
Oct 22 20:23:31.531: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-85e863ea-4b4c-44c8-9d56-49afc194f227 container client-container: 
STEP: delete the pod
Oct 22 20:23:31.564: INFO: Waiting for pod downwardapi-volume-85e863ea-4b4c-44c8-9d56-49afc194f227 to disappear
Oct 22 20:23:31.587: INFO: Pod downwardapi-volume-85e863ea-4b4c-44c8-9d56-49afc194f227 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:23:31.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7168" for this suite.
Oct 22 20:23:37.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:23:37.689: INFO: namespace projected-7168 deletion completed in 6.098393722s

• [SLOW TEST:10.253 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
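
Editor's sketch: here the pod learns its own memory limit from a projected downwardAPI volume rather than from the environment. A minimal sketch of that wiring, assuming an illustrative 64Mi limit, container name and file path (none of which are taken from this run):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("64Mi"), // illustrative limit
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory", // the value surfaced as a file in the volume
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, err := json.MarshalIndent(&pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

The test container simply prints the projected file and exits, so the same Pending → Succeeded progression appears in the log.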
------------------------------
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:23:37.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-bf579abf-656f-4abf-9af0-3548f91bae7a
STEP: Creating a pod to test consume secrets
Oct 22 20:23:37.757: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ee0582eb-ceb7-4a2d-9478-8aa8290633cf" in namespace "projected-8870" to be "success or failure"
Oct 22 20:23:37.818: INFO: Pod "pod-projected-secrets-ee0582eb-ceb7-4a2d-9478-8aa8290633cf": Phase="Pending", Reason="", readiness=false. Elapsed: 61.211698ms
Oct 22 20:23:39.823: INFO: Pod "pod-projected-secrets-ee0582eb-ceb7-4a2d-9478-8aa8290633cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065494029s
Oct 22 20:23:41.826: INFO: Pod "pod-projected-secrets-ee0582eb-ceb7-4a2d-9478-8aa8290633cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069188518s
STEP: Saw pod success
Oct 22 20:23:41.826: INFO: Pod "pod-projected-secrets-ee0582eb-ceb7-4a2d-9478-8aa8290633cf" satisfied condition "success or failure"
Oct 22 20:23:41.829: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-ee0582eb-ceb7-4a2d-9478-8aa8290633cf container projected-secret-volume-test: 
STEP: delete the pod
Oct 22 20:23:42.049: INFO: Waiting for pod pod-projected-secrets-ee0582eb-ceb7-4a2d-9478-8aa8290633cf to disappear
Oct 22 20:23:42.070: INFO: Pod pod-projected-secrets-ee0582eb-ceb7-4a2d-9478-8aa8290633cf no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:23:42.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8870" for this suite.
Oct 22 20:23:48.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:23:48.168: INFO: namespace projected-8870 deletion completed in 6.094109798s

• [SLOW TEST:10.479 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
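
Editor's sketch: the "with mappings" variant differs from the plain secret-volume case in that each Secret key is remapped to a new file path via items. A minimal sketch of that shape, with hypothetical key and path names (the real test generates its own):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/projected/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret",
					MountPath: "/etc/projected",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-secret",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map-demo"},
								Items: []corev1.KeyToPath{{
									Key:  "data-1",          // key inside the Secret (hypothetical)
									Path: "new-path-data-1", // file name it is remapped to inside the volume
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, err := json.MarshalIndent(&pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}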
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:23:48.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct 22 20:23:48.230: INFO: Waiting up to 5m0s for pod "pod-f01b59d8-b45d-4df4-9e2f-c15e151ff805" in namespace "emptydir-2904" to be "success or failure"
Oct 22 20:23:48.237: INFO: Pod "pod-f01b59d8-b45d-4df4-9e2f-c15e151ff805": Phase="Pending", Reason="", readiness=false. Elapsed: 6.394017ms
Oct 22 20:23:50.240: INFO: Pod "pod-f01b59d8-b45d-4df4-9e2f-c15e151ff805": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010130142s
Oct 22 20:23:52.244: INFO: Pod "pod-f01b59d8-b45d-4df4-9e2f-c15e151ff805": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014042437s
STEP: Saw pod success
Oct 22 20:23:52.244: INFO: Pod "pod-f01b59d8-b45d-4df4-9e2f-c15e151ff805" satisfied condition "success or failure"
Oct 22 20:23:52.247: INFO: Trying to get logs from node iruya-worker pod pod-f01b59d8-b45d-4df4-9e2f-c15e151ff805 container test-container: 
STEP: delete the pod
Oct 22 20:23:52.265: INFO: Waiting for pod pod-f01b59d8-b45d-4df4-9e2f-c15e151ff805 to disappear
Oct 22 20:23:52.270: INFO: Pod pod-f01b59d8-b45d-4df4-9e2f-c15e151ff805 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:23:52.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2904" for this suite.
Oct 22 20:23:58.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:23:58.367: INFO: namespace emptydir-2904 deletion completed in 6.093779936s

• [SLOW TEST:10.199 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
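
Editor's sketch: "(non-root,0644,tmpfs)" decodes as a memory-backed emptyDir, a file created with mode 0644, and a container running as a non-root user. A minimal sketch of a pod in that spirit; the UID, image and command are illustrative stand-ins for what the suite's mounttest image actually does:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // hypothetical non-root UID
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"echo hello > /test-volume/file && chmod 0644 /test-volume/file && ls -l /test-volume/file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" asks the kubelet to back the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	out, err := json.MarshalIndent(&pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}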
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:23:58.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-f7723cd4-9643-47b2-9a0d-14b991ae6fe0
STEP: Creating secret with name s-test-opt-upd-c596cc8f-6b8d-4db1-af9a-93624c4419df
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-f7723cd4-9643-47b2-9a0d-14b991ae6fe0
STEP: Updating secret s-test-opt-upd-c596cc8f-6b8d-4db1-af9a-93624c4419df
STEP: Creating secret with name s-test-opt-create-a4bc4dc0-d475-49c5-9ba1-fce03e8f5e1f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:25:26.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9132" for this suite.
Oct 22 20:25:48.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:25:48.831: INFO: namespace secrets-9132 deletion completed in 22.123549204s

• [SLOW TEST:110.464 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
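
Editor's sketch: this case takes longer because the pod keeps running while the kubelet's periodic sync propagates Secret changes into already-mounted volumes: one referenced Secret is deleted, one is updated, and one that did not exist at pod creation is created afterwards. Marking the volume source optional is what lets the pod start and keep running without the missing Secret. A minimal sketch of one such volume, with hypothetical names:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-optional-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "creates-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do ls /etc/secret-volumes/create; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "maybe-created-later",
					MountPath: "/etc/secret-volumes/create",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "maybe-created-later",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "s-test-opt-create-demo", // may not exist yet when the pod starts
						Optional:   &optional,                // tolerate the missing Secret instead of failing the mount
					},
				},
			}},
		},
	}
	out, err := json.MarshalIndent(&pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

Once the Secret is created, the kubelet's next sync materializes its keys inside the already-mounted volume, which is the update the "waiting to observe update in volume" step above is waiting for.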
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:25:48.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:25:55.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-162" for this suite.
Oct 22 20:26:03.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:26:03.259: INFO: namespace namespaces-162 deletion completed in 8.09151344s
STEP: Destroying namespace "nsdeletetest-9372" for this suite.
Oct 22 20:26:03.261: INFO: Namespace nsdeletetest-9372 was already deleted
STEP: Destroying namespace "nsdeletetest-4405" for this suite.
Oct 22 20:26:09.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:26:09.342: INFO: namespace nsdeletetest-4405 deletion completed in 6.081617239s

• [SLOW TEST:20.511 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
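
Editor's sketch: the object created in the doomed namespace is an ordinary Service; the point of the test is that recreating a namespace with the same basename must not resurrect it. A minimal sketch of such a Service (name, port and selector are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "test-service",
			Namespace: "nsdeletetest-demo", // stands in for the generated nsdeletetest-* namespace
		},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"foo": "bar"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
				Protocol:   corev1.ProtocolTCP,
			}},
		},
	}
	out, err := json.MarshalIndent(&svc, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

Deleting the namespace is expected to garbage-collect the Service along with everything else in it, which is what the final "Verifying there is no service in the namespace" step checks.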
------------------------------
SSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:26:09.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Oct 22 20:26:09.388: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Oct 22 20:26:09.410: INFO: Pod name sample-pod: Found 0 pods out of 1
Oct 22 20:26:14.415: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Oct 22 20:26:14.415: INFO: Creating deployment "test-rolling-update-deployment"
Oct 22 20:26:14.420: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Oct 22 20:26:14.445: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Oct 22 20:26:16.452: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Oct 22 20:26:16.455: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738995174, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738995174, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738995174, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738995174, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 22 20:26:18.458: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Oct 22 20:26:18.468: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-3798,SelfLink:/apis/apps/v1/namespaces/deployment-3798/deployments/test-rolling-update-deployment,UID:215d3a97-d0f8-4480-9720-8959b48f1108,ResourceVersion:5325799,Generation:1,CreationTimestamp:2020-10-22 20:26:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-10-22 20:26:14 +0000 UTC 2020-10-22 20:26:14 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-10-22 20:26:17 +0000 UTC 2020-10-22 20:26:14 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Oct 22 20:26:18.471: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-3798,SelfLink:/apis/apps/v1/namespaces/deployment-3798/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:e4aab9ab-9128-4a2d-aed0-c911d22c92d1,ResourceVersion:5325788,Generation:1,CreationTimestamp:2020-10-22 20:26:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 215d3a97-d0f8-4480-9720-8959b48f1108 0xc002416037 0xc002416038}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Oct 22 20:26:18.471: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Oct 22 20:26:18.471: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-3798,SelfLink:/apis/apps/v1/namespaces/deployment-3798/replicasets/test-rolling-update-controller,UID:545d3e67-7992-4faa-93b2-a6c85204c4ce,ResourceVersion:5325797,Generation:2,CreationTimestamp:2020-10-22 20:26:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 215d3a97-d0f8-4480-9720-8959b48f1108 0xc000433827 0xc000433828}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Oct 22 20:26:18.475: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-2fsxt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-2fsxt,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-3798,SelfLink:/api/v1/namespaces/deployment-3798/pods/test-rolling-update-deployment-79f6b9d75c-2fsxt,UID:5fa786cd-7a52-418b-a435-61e9c8fb5ccc,ResourceVersion:5325787,Generation:0,CreationTimestamp:2020-10-22 20:26:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c e4aab9ab-9128-4a2d-aed0-c911d22c92d1 0xc000530387 0xc000530388}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bn2nl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bn2nl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-bn2nl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0005304e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000530500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:26:14 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:26:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:26:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-22 20:26:14 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.86,StartTime:2020-10-22 20:26:14 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-10-22 20:26:17 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://846531ab59c186eb56aa51995966f4aa58b50069610e4037cd78c8d8c0518c2a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:26:18.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3798" for this suite.
Oct 22 20:26:26.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:26:26.685: INFO: namespace deployment-3798 deletion completed in 8.206665049s

• [SLOW TEST:17.342 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
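
Editor's sketch: the Deployment dump above is dense, but the fields that drive the behaviour under test are small: one replica of the redis image, a RollingUpdate strategy with maxUnavailable and maxSurge of 25%, and a selector on name: sample-pod that lets it adopt the pre-existing replica set. A sketch reduced to those fields, with everything else left at defaults:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(1)
	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromString("25%")
	labels := map[string]string{"name": "sample-pod"}

	deploy := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	out, err := json.MarshalIndent(&deploy, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

Because the selector also matches the nginx pods owned by test-rolling-update-controller, the rollout replaces them with redis pods and scales the old replica set to zero, which is exactly what the "All old ReplicaSets" dump above shows.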
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:26:26.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Oct 22 20:26:26.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7404'
Oct 22 20:26:29.521: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Oct 22 20:26:29.521: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Oct 22 20:26:31.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-7404'
Oct 22 20:26:31.790: INFO: stderr: ""
Oct 22 20:26:31.791: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:26:31.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7404" for this suite.
Oct 22 20:26:37.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:26:37.946: INFO: namespace kubectl-7404 deletion completed in 6.114595518s

• [SLOW TEST:11.260 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:26:37.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Oct 22 20:26:38.080: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7755,SelfLink:/api/v1/namespaces/watch-7755/configmaps/e2e-watch-test-watch-closed,UID:3dbe0fbd-5cd3-41d7-9c70-bbdbe466e6e4,ResourceVersion:5325902,Generation:0,CreationTimestamp:2020-10-22 20:26:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Oct 22 20:26:38.080: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7755,SelfLink:/api/v1/namespaces/watch-7755/configmaps/e2e-watch-test-watch-closed,UID:3dbe0fbd-5cd3-41d7-9c70-bbdbe466e6e4,ResourceVersion:5325903,Generation:0,CreationTimestamp:2020-10-22 20:26:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Oct 22 20:26:38.094: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7755,SelfLink:/api/v1/namespaces/watch-7755/configmaps/e2e-watch-test-watch-closed,UID:3dbe0fbd-5cd3-41d7-9c70-bbdbe466e6e4,ResourceVersion:5325904,Generation:0,CreationTimestamp:2020-10-22 20:26:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Oct 22 20:26:38.094: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7755,SelfLink:/api/v1/namespaces/watch-7755/configmaps/e2e-watch-test-watch-closed,UID:3dbe0fbd-5cd3-41d7-9c70-bbdbe466e6e4,ResourceVersion:5325905,Generation:0,CreationTimestamp:2020-10-22 20:26:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:26:38.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7755" for this suite.
Oct 22 20:26:44.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:26:44.174: INFO: namespace watch-7755 deletion completed in 6.074317338s

• [SLOW TEST:6.227 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
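
Editor's sketch: the mechanism exercised here is watch resumption: a client remembers the resourceVersion of the last event it saw (5325903 in the log above), opens a new watch starting from that version, and receives only the changes it missed. A minimal client-go sketch of the idea; it assumes a client-go release contemporary with this v1.15 suite, where Watch takes metav1.ListOptions without a context (newer clients add a context.Context first argument), and the configmap name and namespace are illustrative:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// resourceVersion observed by a previous, now-closed watch (value taken from the log above for illustration).
	lastSeen := "5325903"

	w, err := cs.CoreV1().ConfigMaps("watch-demo").Watch(metav1.ListOptions{
		FieldSelector:   "metadata.name=e2e-watch-test-demo",
		ResourceVersion: lastSeen, // resume from here: only later MODIFIED/DELETED events are delivered
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for event := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", event.Type, event.Object)
	}
}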
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:26:44.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Oct 22 20:26:44.808: INFO: Pod name wrapped-volume-race-723f4142-c06f-450c-b112-3655357a4902: Found 0 pods out of 5
Oct 22 20:26:49.817: INFO: Pod name wrapped-volume-race-723f4142-c06f-450c-b112-3655357a4902: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-723f4142-c06f-450c-b112-3655357a4902 in namespace emptydir-wrapper-7926, will wait for the garbage collector to delete the pods
Oct 22 20:27:03.928: INFO: Deleting ReplicationController wrapped-volume-race-723f4142-c06f-450c-b112-3655357a4902 took: 7.588585ms
Oct 22 20:27:05.328: INFO: Terminating ReplicationController wrapped-volume-race-723f4142-c06f-450c-b112-3655357a4902 pods took: 1.400313467s
STEP: Creating RC which spawns configmap-volume pods
Oct 22 20:27:46.470: INFO: Pod name wrapped-volume-race-e280a597-52f3-471d-9602-a167b3a27a8e: Found 0 pods out of 5
Oct 22 20:27:51.482: INFO: Pod name wrapped-volume-race-e280a597-52f3-471d-9602-a167b3a27a8e: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e280a597-52f3-471d-9602-a167b3a27a8e in namespace emptydir-wrapper-7926, will wait for the garbage collector to delete the pods
Oct 22 20:28:05.565: INFO: Deleting ReplicationController wrapped-volume-race-e280a597-52f3-471d-9602-a167b3a27a8e took: 6.27085ms
Oct 22 20:28:05.965: INFO: Terminating ReplicationController wrapped-volume-race-e280a597-52f3-471d-9602-a167b3a27a8e pods took: 400.191631ms
STEP: Creating RC which spawns configmap-volume pods
Oct 22 20:28:46.601: INFO: Pod name wrapped-volume-race-2720fc6f-8ff8-4341-8e07-98579fddc2fd: Found 0 pods out of 5
Oct 22 20:28:51.609: INFO: Pod name wrapped-volume-race-2720fc6f-8ff8-4341-8e07-98579fddc2fd: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-2720fc6f-8ff8-4341-8e07-98579fddc2fd in namespace emptydir-wrapper-7926, will wait for the garbage collector to delete the pods
Oct 22 20:29:07.030: INFO: Deleting ReplicationController wrapped-volume-race-2720fc6f-8ff8-4341-8e07-98579fddc2fd took: 29.766892ms
Oct 22 20:29:07.331: INFO: Terminating ReplicationController wrapped-volume-race-2720fc6f-8ff8-4341-8e07-98579fddc2fd pods took: 300.254384ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:29:48.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7926" for this suite.
Oct 22 20:29:56.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:29:56.205: INFO: namespace emptydir-wrapper-7926 deletion completed in 8.080266836s

• [SLOW TEST:192.031 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
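
Editor's sketch: each round above creates 50 ConfigMaps and a ReplicationController whose 5 pods mount every one of them as a separate volume, which is the pattern that historically raced inside the emptyDir wrapper. A pared-down sketch of that pod shape, generating only a handful of volumes and using hypothetical names:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(5)
	labels := map[string]string{"name": "wrapped-volume-race-demo"}

	// The conformance test mounts all 50 generated ConfigMaps; 3 are enough to show the pattern.
	var volumes []corev1.Volume
	var mounts []corev1.VolumeMount
	for i := 0; i < 3; i++ {
		name := fmt.Sprintf("racey-configmap-%d", i)
		volumes = append(volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
		mounts = append(mounts, corev1.VolumeMount{Name: name, MountPath: "/etc/config/" + name})
	}

	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapped-volume-race-demo"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:         "test-container",
						Image:        "busybox",
						Command:      []string{"sleep", "3600"},
						VolumeMounts: mounts,
					}},
					Volumes: volumes,
				},
			},
		},
	}
	out, err := json.MarshalIndent(&rc, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}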
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:29:56.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Oct 22 20:29:56.279: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Oct 22 20:29:56.287: INFO: Waiting for terminating namespaces to be deleted...
Oct 22 20:29:56.289: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Oct 22 20:29:56.295: INFO: kindnet-7bsvw from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Oct 22 20:29:56.295: INFO: 	Container kindnet-cni ready: true, restart count 0
Oct 22 20:29:56.295: INFO: kube-proxy-mtljr from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Oct 22 20:29:56.295: INFO: 	Container kube-proxy ready: true, restart count 0
Oct 22 20:29:56.295: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Oct 22 20:29:56.300: INFO: kindnet-djqgh from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Oct 22 20:29:56.300: INFO: 	Container kindnet-cni ready: true, restart count 0
Oct 22 20:29:56.300: INFO: kube-proxy-52wt5 from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Oct 22 20:29:56.300: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-8b87b4df-743f-4d65-933a-f243f23670a5 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-8b87b4df-743f-4d65-933a-f243f23670a5 off the node iruya-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-8b87b4df-743f-4d65-933a-f243f23670a5
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:30:04.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4297" for this suite.
Oct 22 20:30:16.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:30:16.882: INFO: namespace sched-pred-4297 deletion completed in 12.108912311s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:20.677 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
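
Editor's sketch: the matching case is the mirror image of the non-matching one: the suite first applies a unique label (kubernetes.io/e2e-…=42) to a node it knows can host a pod, then relaunches the pod with a nodeSelector requiring that label, and the pod must schedule onto exactly that node. A minimal sketch of the relaunched pod; the pod name, image, and label key are placeholders for the generated ones:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause:3.1", // illustrative placeholder image
			}},
			NodeSelector: map[string]string{
				// Placeholder for the generated kubernetes.io/e2e-<uuid> label applied to iruya-worker above.
				"kubernetes.io/e2e-demo": "42",
			},
		},
	}
	out, err := json.MarshalIndent(&pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}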
------------------------------
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:30:16.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-6652
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 22 20:30:16.980: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Oct 22 20:30:39.123: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.87:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6652 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 22 20:30:39.123: INFO: >>> kubeConfig: /root/.kube/config
I1022 20:30:39.166872       6 log.go:172] (0xc001151810) (0xc00173f7c0) Create stream
I1022 20:30:39.166907       6 log.go:172] (0xc001151810) (0xc00173f7c0) Stream added, broadcasting: 1
I1022 20:30:39.168505       6 log.go:172] (0xc001151810) Reply frame received for 1
I1022 20:30:39.168534       6 log.go:172] (0xc001151810) (0xc00345a140) Create stream
I1022 20:30:39.168546       6 log.go:172] (0xc001151810) (0xc00345a140) Stream added, broadcasting: 3
I1022 20:30:39.169280       6 log.go:172] (0xc001151810) Reply frame received for 3
I1022 20:30:39.169319       6 log.go:172] (0xc001151810) (0xc001a0dcc0) Create stream
I1022 20:30:39.169329       6 log.go:172] (0xc001151810) (0xc001a0dcc0) Stream added, broadcasting: 5
I1022 20:30:39.169896       6 log.go:172] (0xc001151810) Reply frame received for 5
I1022 20:30:39.269182       6 log.go:172] (0xc001151810) Data frame received for 3
I1022 20:30:39.269217       6 log.go:172] (0xc001151810) Data frame received for 5
I1022 20:30:39.269238       6 log.go:172] (0xc001a0dcc0) (5) Data frame handling
I1022 20:30:39.269301       6 log.go:172] (0xc00345a140) (3) Data frame handling
I1022 20:30:39.269361       6 log.go:172] (0xc00345a140) (3) Data frame sent
I1022 20:30:39.269385       6 log.go:172] (0xc001151810) Data frame received for 3
I1022 20:30:39.269398       6 log.go:172] (0xc00345a140) (3) Data frame handling
I1022 20:30:39.271317       6 log.go:172] (0xc001151810) Data frame received for 1
I1022 20:30:39.271346       6 log.go:172] (0xc00173f7c0) (1) Data frame handling
I1022 20:30:39.271366       6 log.go:172] (0xc00173f7c0) (1) Data frame sent
I1022 20:30:39.271384       6 log.go:172] (0xc001151810) (0xc00173f7c0) Stream removed, broadcasting: 1
I1022 20:30:39.271488       6 log.go:172] (0xc001151810) (0xc00173f7c0) Stream removed, broadcasting: 1
I1022 20:30:39.271508       6 log.go:172] (0xc001151810) (0xc00345a140) Stream removed, broadcasting: 3
I1022 20:30:39.271530       6 log.go:172] (0xc001151810) (0xc001a0dcc0) Stream removed, broadcasting: 5
I1022 20:30:39.271570       6 log.go:172] (0xc001151810) Go away received
Oct 22 20:30:39.271: INFO: Found all expected endpoints: [netserver-0]
Oct 22 20:30:39.284: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.73:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6652 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 22 20:30:39.284: INFO: >>> kubeConfig: /root/.kube/config
I1022 20:30:39.319792       6 log.go:172] (0xc000f504d0) (0xc002276dc0) Create stream
I1022 20:30:39.319824       6 log.go:172] (0xc000f504d0) (0xc002276dc0) Stream added, broadcasting: 1
I1022 20:30:39.332338       6 log.go:172] (0xc000f504d0) Reply frame received for 1
I1022 20:30:39.332389       6 log.go:172] (0xc000f504d0) (0xc00173f860) Create stream
I1022 20:30:39.332403       6 log.go:172] (0xc000f504d0) (0xc00173f860) Stream added, broadcasting: 3
I1022 20:30:39.333286       6 log.go:172] (0xc000f504d0) Reply frame received for 3
I1022 20:30:39.333318       6 log.go:172] (0xc000f504d0) (0xc00173f900) Create stream
I1022 20:30:39.333326       6 log.go:172] (0xc000f504d0) (0xc00173f900) Stream added, broadcasting: 5
I1022 20:30:39.333943       6 log.go:172] (0xc000f504d0) Reply frame received for 5
I1022 20:30:39.384929       6 log.go:172] (0xc000f504d0) Data frame received for 3
I1022 20:30:39.384974       6 log.go:172] (0xc000f504d0) Data frame received for 5
I1022 20:30:39.385012       6 log.go:172] (0xc00173f900) (5) Data frame handling
I1022 20:30:39.385053       6 log.go:172] (0xc00173f860) (3) Data frame handling
I1022 20:30:39.385078       6 log.go:172] (0xc00173f860) (3) Data frame sent
I1022 20:30:39.385102       6 log.go:172] (0xc000f504d0) Data frame received for 3
I1022 20:30:39.385122       6 log.go:172] (0xc00173f860) (3) Data frame handling
I1022 20:30:39.386942       6 log.go:172] (0xc000f504d0) Data frame received for 1
I1022 20:30:39.387036       6 log.go:172] (0xc002276dc0) (1) Data frame handling
I1022 20:30:39.387089       6 log.go:172] (0xc002276dc0) (1) Data frame sent
I1022 20:30:39.387119       6 log.go:172] (0xc000f504d0) (0xc002276dc0) Stream removed, broadcasting: 1
I1022 20:30:39.387152       6 log.go:172] (0xc000f504d0) Go away received
I1022 20:30:39.387276       6 log.go:172] (0xc000f504d0) (0xc002276dc0) Stream removed, broadcasting: 1
I1022 20:30:39.387325       6 log.go:172] (0xc000f504d0) (0xc00173f860) Stream removed, broadcasting: 3
I1022 20:30:39.387354       6 log.go:172] (0xc000f504d0) (0xc00173f900) Stream removed, broadcasting: 5
Oct 22 20:30:39.387: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:30:39.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6652" for this suite.
Oct 22 20:31:03.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:31:03.492: INFO: namespace pod-network-test-6652 deletion completed in 24.10052317s

• [SLOW TEST:46.610 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
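Editor's note: the spec above verifies node-to-pod HTTP connectivity by exec'ing curl from a host-network test pod against each netserver pod's /hostName endpoint (the ExecWithOptions lines). As a rough illustration only, here is a minimal Go sketch of the same probe issued directly; the pod IP 10.244.1.73 and expected name netserver-1 are copied from the log, and the real test runs the request through kubectl exec inside the cluster rather than from the test binary.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// probeHostName performs the same kind of check the spec shells out to curl for:
// GET http://<podIP>:8080/hostName and compare the trimmed body to the expected pod name.
func probeHostName(podIP, expected string) error {
	client := &http.Client{Timeout: 15 * time.Second}
	resp, err := client.Get(fmt.Sprintf("http://%s:8080/hostName", podIP))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	if got := strings.TrimSpace(string(body)); got != expected {
		return fmt.Errorf("unexpected hostname: got %q, want %q", got, expected)
	}
	return nil
}

func main() {
	// Values below are placeholders taken from the log above.
	if err := probeHostName("10.244.1.73", "netserver-1"); err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("found expected endpoint")
}
```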
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:31:03.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-8045
I1022 20:31:03.601600       6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8045, replica count: 1
I1022 20:31:04.652137       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1022 20:31:05.652371       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1022 20:31:06.652657       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Oct 22 20:31:06.785: INFO: Created: latency-svc-4bdcx
Oct 22 20:31:06.794: INFO: Got endpoints: latency-svc-4bdcx [41.772097ms]
Oct 22 20:31:06.829: INFO: Created: latency-svc-rwfrh
Oct 22 20:31:06.901: INFO: Got endpoints: latency-svc-rwfrh [106.183343ms]
Oct 22 20:31:06.903: INFO: Created: latency-svc-4xd6q
Oct 22 20:31:06.917: INFO: Got endpoints: latency-svc-4xd6q [122.268275ms]
Oct 22 20:31:06.998: INFO: Created: latency-svc-b5lwg
Oct 22 20:31:07.068: INFO: Got endpoints: latency-svc-b5lwg [273.731841ms]
Oct 22 20:31:07.106: INFO: Created: latency-svc-pm2lw
Oct 22 20:31:07.137: INFO: Got endpoints: latency-svc-pm2lw [342.576604ms]
Oct 22 20:31:07.212: INFO: Created: latency-svc-4rx9p
Oct 22 20:31:07.220: INFO: Got endpoints: latency-svc-4rx9p [425.671641ms]
Oct 22 20:31:07.243: INFO: Created: latency-svc-48s6p
Oct 22 20:31:07.259: INFO: Got endpoints: latency-svc-48s6p [464.634712ms]
Oct 22 20:31:07.286: INFO: Created: latency-svc-5tvmf
Oct 22 20:31:07.304: INFO: Got endpoints: latency-svc-5tvmf [508.856505ms]
Oct 22 20:31:07.362: INFO: Created: latency-svc-ncpzs
Oct 22 20:31:07.368: INFO: Got endpoints: latency-svc-ncpzs [573.043187ms]
Oct 22 20:31:07.390: INFO: Created: latency-svc-gpwd5
Oct 22 20:31:07.405: INFO: Got endpoints: latency-svc-gpwd5 [609.934649ms]
Oct 22 20:31:07.443: INFO: Created: latency-svc-vf9pg
Oct 22 20:31:07.453: INFO: Got endpoints: latency-svc-vf9pg [658.274731ms]
Oct 22 20:31:07.496: INFO: Created: latency-svc-tz6gj
Oct 22 20:31:07.531: INFO: Got endpoints: latency-svc-tz6gj [736.637134ms]
Oct 22 20:31:07.570: INFO: Created: latency-svc-bkbgs
Oct 22 20:31:07.586: INFO: Got endpoints: latency-svc-bkbgs [790.963265ms]
Oct 22 20:31:07.638: INFO: Created: latency-svc-8pc9n
Oct 22 20:31:07.641: INFO: Got endpoints: latency-svc-8pc9n [845.882973ms]
Oct 22 20:31:07.675: INFO: Created: latency-svc-xbzpz
Oct 22 20:31:07.699: INFO: Got endpoints: latency-svc-xbzpz [904.5998ms]
Oct 22 20:31:07.730: INFO: Created: latency-svc-9xxg8
Oct 22 20:31:07.781: INFO: Got endpoints: latency-svc-9xxg8 [985.933955ms]
Oct 22 20:31:07.801: INFO: Created: latency-svc-2b2qc
Oct 22 20:31:07.815: INFO: Got endpoints: latency-svc-2b2qc [914.72158ms]
Oct 22 20:31:07.840: INFO: Created: latency-svc-g8ndv
Oct 22 20:31:07.856: INFO: Got endpoints: latency-svc-g8ndv [939.524167ms]
Oct 22 20:31:07.874: INFO: Created: latency-svc-9tzr2
Oct 22 20:31:07.937: INFO: Got endpoints: latency-svc-9tzr2 [868.578303ms]
Oct 22 20:31:07.958: INFO: Created: latency-svc-mq7ss
Oct 22 20:31:07.971: INFO: Got endpoints: latency-svc-mq7ss [833.347195ms]
Oct 22 20:31:08.002: INFO: Created: latency-svc-85h8h
Oct 22 20:31:08.020: INFO: Got endpoints: latency-svc-85h8h [799.768789ms]
Oct 22 20:31:08.081: INFO: Created: latency-svc-ltrwr
Oct 22 20:31:08.114: INFO: Got endpoints: latency-svc-ltrwr [854.312966ms]
Oct 22 20:31:08.115: INFO: Created: latency-svc-rmt5t
Oct 22 20:31:08.167: INFO: Got endpoints: latency-svc-rmt5t [863.599574ms]
Oct 22 20:31:08.230: INFO: Created: latency-svc-m2cxq
Oct 22 20:31:08.236: INFO: Got endpoints: latency-svc-m2cxq [868.097533ms]
Oct 22 20:31:08.260: INFO: Created: latency-svc-6z54l
Oct 22 20:31:08.272: INFO: Got endpoints: latency-svc-6z54l [867.269975ms]
Oct 22 20:31:08.296: INFO: Created: latency-svc-pqp8n
Oct 22 20:31:08.330: INFO: Got endpoints: latency-svc-pqp8n [876.521948ms]
Oct 22 20:31:08.422: INFO: Created: latency-svc-28np5
Oct 22 20:31:08.429: INFO: Got endpoints: latency-svc-28np5 [897.204041ms]
Oct 22 20:31:08.470: INFO: Created: latency-svc-x5xqs
Oct 22 20:31:08.483: INFO: Got endpoints: latency-svc-x5xqs [897.279122ms]
Oct 22 20:31:08.503: INFO: Created: latency-svc-vx9dg
Oct 22 20:31:08.520: INFO: Got endpoints: latency-svc-vx9dg [879.418257ms]
Oct 22 20:31:08.571: INFO: Created: latency-svc-dbb22
Oct 22 20:31:08.575: INFO: Got endpoints: latency-svc-dbb22 [875.102562ms]
Oct 22 20:31:08.618: INFO: Created: latency-svc-d9ppv
Oct 22 20:31:08.634: INFO: Got endpoints: latency-svc-d9ppv [853.153257ms]
Oct 22 20:31:08.710: INFO: Created: latency-svc-2vk9t
Oct 22 20:31:08.718: INFO: Got endpoints: latency-svc-2vk9t [902.676401ms]
Oct 22 20:31:08.768: INFO: Created: latency-svc-rgh8h
Oct 22 20:31:08.785: INFO: Got endpoints: latency-svc-rgh8h [929.045928ms]
Oct 22 20:31:08.853: INFO: Created: latency-svc-ps4v6
Oct 22 20:31:08.856: INFO: Got endpoints: latency-svc-ps4v6 [919.617125ms]
Oct 22 20:31:08.902: INFO: Created: latency-svc-cpnw4
Oct 22 20:31:08.923: INFO: Got endpoints: latency-svc-cpnw4 [951.989983ms]
Oct 22 20:31:08.953: INFO: Created: latency-svc-klhbx
Oct 22 20:31:08.986: INFO: Got endpoints: latency-svc-klhbx [965.690964ms]
Oct 22 20:31:09.002: INFO: Created: latency-svc-7scks
Oct 22 20:31:09.019: INFO: Got endpoints: latency-svc-7scks [905.19308ms]
Oct 22 20:31:09.050: INFO: Created: latency-svc-fqfjz
Oct 22 20:31:09.061: INFO: Got endpoints: latency-svc-fqfjz [893.979447ms]
Oct 22 20:31:09.123: INFO: Created: latency-svc-78tnw
Oct 22 20:31:09.126: INFO: Got endpoints: latency-svc-78tnw [890.549756ms]
Oct 22 20:31:09.154: INFO: Created: latency-svc-8w6ql
Oct 22 20:31:09.172: INFO: Got endpoints: latency-svc-8w6ql [900.495734ms]
Oct 22 20:31:09.190: INFO: Created: latency-svc-xq92c
Oct 22 20:31:09.206: INFO: Got endpoints: latency-svc-xq92c [876.568867ms]
Oct 22 20:31:09.266: INFO: Created: latency-svc-v48vg
Oct 22 20:31:09.272: INFO: Got endpoints: latency-svc-v48vg [843.395726ms]
Oct 22 20:31:09.301: INFO: Created: latency-svc-j2cb5
Oct 22 20:31:09.315: INFO: Got endpoints: latency-svc-j2cb5 [832.018736ms]
Oct 22 20:31:09.337: INFO: Created: latency-svc-9qgh5
Oct 22 20:31:09.345: INFO: Got endpoints: latency-svc-9qgh5 [825.367904ms]
Oct 22 20:31:09.404: INFO: Created: latency-svc-25zp5
Oct 22 20:31:09.408: INFO: Got endpoints: latency-svc-25zp5 [833.11261ms]
Oct 22 20:31:09.442: INFO: Created: latency-svc-hd2z6
Oct 22 20:31:09.454: INFO: Got endpoints: latency-svc-hd2z6 [820.365135ms]
Oct 22 20:31:09.481: INFO: Created: latency-svc-jmpwl
Oct 22 20:31:09.496: INFO: Got endpoints: latency-svc-jmpwl [777.933058ms]
Oct 22 20:31:09.566: INFO: Created: latency-svc-pnf29
Oct 22 20:31:09.577: INFO: Got endpoints: latency-svc-pnf29 [791.06706ms]
Oct 22 20:31:09.601: INFO: Created: latency-svc-9mfpf
Oct 22 20:31:09.619: INFO: Got endpoints: latency-svc-9mfpf [762.538899ms]
Oct 22 20:31:09.639: INFO: Created: latency-svc-dgmwb
Oct 22 20:31:09.649: INFO: Got endpoints: latency-svc-dgmwb [726.690265ms]
Oct 22 20:31:09.710: INFO: Created: latency-svc-rbjkm
Oct 22 20:31:09.721: INFO: Got endpoints: latency-svc-rbjkm [735.027308ms]
Oct 22 20:31:09.751: INFO: Created: latency-svc-7crs5
Oct 22 20:31:09.763: INFO: Got endpoints: latency-svc-7crs5 [744.616985ms]
Oct 22 20:31:09.781: INFO: Created: latency-svc-wfln4
Oct 22 20:31:09.794: INFO: Got endpoints: latency-svc-wfln4 [732.943773ms]
Oct 22 20:31:09.847: INFO: Created: latency-svc-kdphc
Oct 22 20:31:09.850: INFO: Got endpoints: latency-svc-kdphc [723.738423ms]
Oct 22 20:31:10.720: INFO: Created: latency-svc-mpvfs
Oct 22 20:31:10.747: INFO: Got endpoints: latency-svc-mpvfs [1.574779535s]
Oct 22 20:31:10.794: INFO: Created: latency-svc-576hh
Oct 22 20:31:10.807: INFO: Got endpoints: latency-svc-576hh [1.600506645s]
Oct 22 20:31:10.877: INFO: Created: latency-svc-fklp2
Oct 22 20:31:10.879: INFO: Got endpoints: latency-svc-fklp2 [1.606694739s]
Oct 22 20:31:11.033: INFO: Created: latency-svc-98jwk
Oct 22 20:31:11.037: INFO: Got endpoints: latency-svc-98jwk [1.721333399s]
Oct 22 20:31:11.090: INFO: Created: latency-svc-m5dd8
Oct 22 20:31:11.114: INFO: Got endpoints: latency-svc-m5dd8 [1.768281538s]
Oct 22 20:31:11.187: INFO: Created: latency-svc-cfsh7
Oct 22 20:31:11.213: INFO: Got endpoints: latency-svc-cfsh7 [1.805700098s]
Oct 22 20:31:11.214: INFO: Created: latency-svc-6rd92
Oct 22 20:31:11.234: INFO: Got endpoints: latency-svc-6rd92 [1.779285841s]
Oct 22 20:31:11.274: INFO: Created: latency-svc-kr7n6
Oct 22 20:31:11.314: INFO: Got endpoints: latency-svc-kr7n6 [1.818145146s]
Oct 22 20:31:11.354: INFO: Created: latency-svc-vz497
Oct 22 20:31:11.379: INFO: Got endpoints: latency-svc-vz497 [1.801893246s]
Oct 22 20:31:11.402: INFO: Created: latency-svc-h9sm5
Oct 22 20:31:11.439: INFO: Got endpoints: latency-svc-h9sm5 [1.820286401s]
Oct 22 20:31:11.454: INFO: Created: latency-svc-dnbjl
Oct 22 20:31:11.469: INFO: Got endpoints: latency-svc-dnbjl [1.819526458s]
Oct 22 20:31:11.502: INFO: Created: latency-svc-7k8xk
Oct 22 20:31:11.517: INFO: Got endpoints: latency-svc-7k8xk [1.796328541s]
Oct 22 20:31:11.596: INFO: Created: latency-svc-gxbkq
Oct 22 20:31:11.625: INFO: Got endpoints: latency-svc-gxbkq [1.86117903s]
Oct 22 20:31:11.626: INFO: Created: latency-svc-gkwlm
Oct 22 20:31:11.638: INFO: Got endpoints: latency-svc-gkwlm [1.843433206s]
Oct 22 20:31:11.663: INFO: Created: latency-svc-5rxtl
Oct 22 20:31:11.680: INFO: Got endpoints: latency-svc-5rxtl [1.829614464s]
Oct 22 20:31:11.739: INFO: Created: latency-svc-bqzwf
Oct 22 20:31:11.772: INFO: Got endpoints: latency-svc-bqzwf [1.024262737s]
Oct 22 20:31:11.775: INFO: Created: latency-svc-cbmv2
Oct 22 20:31:11.794: INFO: Got endpoints: latency-svc-cbmv2 [987.596406ms]
Oct 22 20:31:11.816: INFO: Created: latency-svc-5slcj
Oct 22 20:31:11.831: INFO: Got endpoints: latency-svc-5slcj [951.943706ms]
Oct 22 20:31:11.889: INFO: Created: latency-svc-57g2p
Oct 22 20:31:11.951: INFO: Got endpoints: latency-svc-57g2p [914.850585ms]
Oct 22 20:31:11.953: INFO: Created: latency-svc-tsgvc
Oct 22 20:31:11.969: INFO: Got endpoints: latency-svc-tsgvc [855.216791ms]
Oct 22 20:31:11.988: INFO: Created: latency-svc-p5bmd
Oct 22 20:31:12.044: INFO: Got endpoints: latency-svc-p5bmd [830.471908ms]
Oct 22 20:31:12.068: INFO: Created: latency-svc-w969z
Oct 22 20:31:12.078: INFO: Got endpoints: latency-svc-w969z [844.157387ms]
Oct 22 20:31:12.104: INFO: Created: latency-svc-tv7vc
Oct 22 20:31:12.114: INFO: Got endpoints: latency-svc-tv7vc [799.597616ms]
Oct 22 20:31:12.137: INFO: Created: latency-svc-2j65v
Oct 22 20:31:12.188: INFO: Got endpoints: latency-svc-2j65v [809.560844ms]
Oct 22 20:31:12.190: INFO: Created: latency-svc-2s8x4
Oct 22 20:31:12.198: INFO: Got endpoints: latency-svc-2s8x4 [758.583874ms]
Oct 22 20:31:12.218: INFO: Created: latency-svc-t9p7k
Oct 22 20:31:12.235: INFO: Got endpoints: latency-svc-t9p7k [765.723618ms]
Oct 22 20:31:12.260: INFO: Created: latency-svc-q6j8x
Oct 22 20:31:12.271: INFO: Got endpoints: latency-svc-q6j8x [753.384818ms]
Oct 22 20:31:12.326: INFO: Created: latency-svc-sgh2m
Oct 22 20:31:12.344: INFO: Got endpoints: latency-svc-sgh2m [718.873957ms]
Oct 22 20:31:12.408: INFO: Created: latency-svc-t9vfg
Oct 22 20:31:12.470: INFO: Got endpoints: latency-svc-t9vfg [831.834498ms]
Oct 22 20:31:12.490: INFO: Created: latency-svc-ndkn4
Oct 22 20:31:12.510: INFO: Got endpoints: latency-svc-ndkn4 [830.497295ms]
Oct 22 20:31:12.542: INFO: Created: latency-svc-fj75n
Oct 22 20:31:12.566: INFO: Got endpoints: latency-svc-fj75n [793.981022ms]
Oct 22 20:31:12.638: INFO: Created: latency-svc-sxm7d
Oct 22 20:31:12.641: INFO: Got endpoints: latency-svc-sxm7d [846.414286ms]
Oct 22 20:31:12.676: INFO: Created: latency-svc-7kz9j
Oct 22 20:31:12.698: INFO: Got endpoints: latency-svc-7kz9j [867.321319ms]
Oct 22 20:31:12.728: INFO: Created: latency-svc-jcv6v
Oct 22 20:31:12.799: INFO: Got endpoints: latency-svc-jcv6v [847.415175ms]
Oct 22 20:31:12.801: INFO: Created: latency-svc-mm4g5
Oct 22 20:31:12.813: INFO: Got endpoints: latency-svc-mm4g5 [844.122732ms]
Oct 22 20:31:12.955: INFO: Created: latency-svc-dss2k
Oct 22 20:31:12.959: INFO: Got endpoints: latency-svc-dss2k [915.054499ms]
Oct 22 20:31:13.043: INFO: Created: latency-svc-kt5zj
Oct 22 20:31:13.110: INFO: Got endpoints: latency-svc-kt5zj [1.032247645s]
Oct 22 20:31:13.113: INFO: Created: latency-svc-xrkm5
Oct 22 20:31:13.125: INFO: Got endpoints: latency-svc-xrkm5 [1.011059559s]
Oct 22 20:31:13.190: INFO: Created: latency-svc-fvww7
Oct 22 20:31:13.203: INFO: Got endpoints: latency-svc-fvww7 [1.014940308s]
Oct 22 20:31:13.249: INFO: Created: latency-svc-7tr28
Oct 22 20:31:13.251: INFO: Got endpoints: latency-svc-7tr28 [1.05318957s]
Oct 22 20:31:13.278: INFO: Created: latency-svc-m4bzz
Oct 22 20:31:13.294: INFO: Got endpoints: latency-svc-m4bzz [1.059548512s]
Oct 22 20:31:13.316: INFO: Created: latency-svc-fq696
Oct 22 20:31:13.330: INFO: Got endpoints: latency-svc-fq696 [1.059407501s]
Oct 22 20:31:13.388: INFO: Created: latency-svc-m5j86
Oct 22 20:31:13.402: INFO: Got endpoints: latency-svc-m5j86 [1.058527884s]
Oct 22 20:31:13.429: INFO: Created: latency-svc-4hvkq
Oct 22 20:31:13.445: INFO: Got endpoints: latency-svc-4hvkq [975.116926ms]
Oct 22 20:31:13.472: INFO: Created: latency-svc-rqfkh
Oct 22 20:31:13.523: INFO: Got endpoints: latency-svc-rqfkh [1.012875606s]
Oct 22 20:31:13.537: INFO: Created: latency-svc-6gd8w
Oct 22 20:31:13.559: INFO: Got endpoints: latency-svc-6gd8w [993.38856ms]
Oct 22 20:31:13.667: INFO: Created: latency-svc-tn6xq
Oct 22 20:31:13.674: INFO: Got endpoints: latency-svc-tn6xq [1.033500947s]
Oct 22 20:31:13.705: INFO: Created: latency-svc-gssnm
Oct 22 20:31:13.723: INFO: Got endpoints: latency-svc-gssnm [1.024811516s]
Oct 22 20:31:13.754: INFO: Created: latency-svc-b7g2f
Oct 22 20:31:13.787: INFO: Got endpoints: latency-svc-b7g2f [987.614484ms]
Oct 22 20:31:13.812: INFO: Created: latency-svc-nkbs7
Oct 22 20:31:13.835: INFO: Got endpoints: latency-svc-nkbs7 [1.021570262s]
Oct 22 20:31:13.866: INFO: Created: latency-svc-fxrc6
Oct 22 20:31:13.880: INFO: Got endpoints: latency-svc-fxrc6 [920.872654ms]
Oct 22 20:31:13.943: INFO: Created: latency-svc-qkw2g
Oct 22 20:31:13.946: INFO: Got endpoints: latency-svc-qkw2g [835.335712ms]
Oct 22 20:31:13.976: INFO: Created: latency-svc-nnfc6
Oct 22 20:31:13.994: INFO: Got endpoints: latency-svc-nnfc6 [869.051116ms]
Oct 22 20:31:14.018: INFO: Created: latency-svc-pfs6q
Oct 22 20:31:14.031: INFO: Got endpoints: latency-svc-pfs6q [827.310862ms]
Oct 22 20:31:14.080: INFO: Created: latency-svc-qdzfb
Oct 22 20:31:14.084: INFO: Got endpoints: latency-svc-qdzfb [832.373762ms]
Oct 22 20:31:14.111: INFO: Created: latency-svc-nwrht
Oct 22 20:31:14.127: INFO: Got endpoints: latency-svc-nwrht [832.713808ms]
Oct 22 20:31:14.153: INFO: Created: latency-svc-n8ppq
Oct 22 20:31:14.170: INFO: Got endpoints: latency-svc-n8ppq [839.308098ms]
Oct 22 20:31:14.224: INFO: Created: latency-svc-7x79p
Oct 22 20:31:14.228: INFO: Got endpoints: latency-svc-7x79p [825.308005ms]
Oct 22 20:31:14.251: INFO: Created: latency-svc-zjw5s
Oct 22 20:31:14.266: INFO: Got endpoints: latency-svc-zjw5s [821.233998ms]
Oct 22 20:31:14.290: INFO: Created: latency-svc-924t4
Oct 22 20:31:14.296: INFO: Got endpoints: latency-svc-924t4 [772.571674ms]
Oct 22 20:31:14.317: INFO: Created: latency-svc-kdl77
Oct 22 20:31:14.368: INFO: Got endpoints: latency-svc-kdl77 [808.526606ms]
Oct 22 20:31:14.399: INFO: Created: latency-svc-zznlm
Oct 22 20:31:14.423: INFO: Got endpoints: latency-svc-zznlm [748.214579ms]
Oct 22 20:31:14.519: INFO: Created: latency-svc-vhz6b
Oct 22 20:31:14.521: INFO: Got endpoints: latency-svc-vhz6b [798.064959ms]
Oct 22 20:31:14.597: INFO: Created: latency-svc-lz2tq
Oct 22 20:31:14.615: INFO: Got endpoints: latency-svc-lz2tq [828.149117ms]
Oct 22 20:31:14.679: INFO: Created: latency-svc-xt68m
Oct 22 20:31:14.707: INFO: Got endpoints: latency-svc-xt68m [872.325755ms]
Oct 22 20:31:14.747: INFO: Created: latency-svc-g5qzd
Oct 22 20:31:14.766: INFO: Got endpoints: latency-svc-g5qzd [885.644767ms]
Oct 22 20:31:14.835: INFO: Created: latency-svc-ssg8d
Oct 22 20:31:14.856: INFO: Got endpoints: latency-svc-ssg8d [909.77746ms]
Oct 22 20:31:14.907: INFO: Created: latency-svc-q622n
Oct 22 20:31:14.996: INFO: Got endpoints: latency-svc-q622n [1.002072715s]
Oct 22 20:31:15.017: INFO: Created: latency-svc-b2d2w
Oct 22 20:31:15.036: INFO: Got endpoints: latency-svc-b2d2w [1.005342703s]
Oct 22 20:31:15.068: INFO: Created: latency-svc-zxctg
Oct 22 20:31:15.084: INFO: Got endpoints: latency-svc-zxctg [1.00033247s]
Oct 22 20:31:15.124: INFO: Created: latency-svc-qgrpv
Oct 22 20:31:15.145: INFO: Got endpoints: latency-svc-qgrpv [1.018261543s]
Oct 22 20:31:15.173: INFO: Created: latency-svc-4fxrf
Oct 22 20:31:15.187: INFO: Got endpoints: latency-svc-4fxrf [1.016985647s]
Oct 22 20:31:15.209: INFO: Created: latency-svc-djhgv
Oct 22 20:31:15.266: INFO: Got endpoints: latency-svc-djhgv [1.038189233s]
Oct 22 20:31:15.268: INFO: Created: latency-svc-k9lwd
Oct 22 20:31:15.277: INFO: Got endpoints: latency-svc-k9lwd [1.010746019s]
Oct 22 20:31:15.302: INFO: Created: latency-svc-6vx9j
Oct 22 20:31:15.319: INFO: Got endpoints: latency-svc-6vx9j [1.023391508s]
Oct 22 20:31:15.343: INFO: Created: latency-svc-5r296
Oct 22 20:31:15.356: INFO: Got endpoints: latency-svc-5r296 [987.875808ms]
Oct 22 20:31:15.410: INFO: Created: latency-svc-q9cnd
Oct 22 20:31:15.413: INFO: Got endpoints: latency-svc-q9cnd [990.242162ms]
Oct 22 20:31:15.467: INFO: Created: latency-svc-vprw9
Oct 22 20:31:15.482: INFO: Got endpoints: latency-svc-vprw9 [960.900658ms]
Oct 22 20:31:15.571: INFO: Created: latency-svc-wl5cq
Oct 22 20:31:15.577: INFO: Got endpoints: latency-svc-wl5cq [961.73676ms]
Oct 22 20:31:15.601: INFO: Created: latency-svc-9rjfm
Oct 22 20:31:15.615: INFO: Got endpoints: latency-svc-9rjfm [907.441603ms]
Oct 22 20:31:15.637: INFO: Created: latency-svc-cprws
Oct 22 20:31:15.651: INFO: Got endpoints: latency-svc-cprws [885.253427ms]
Oct 22 20:31:15.710: INFO: Created: latency-svc-stq9m
Oct 22 20:31:15.717: INFO: Got endpoints: latency-svc-stq9m [861.786064ms]
Oct 22 20:31:15.761: INFO: Created: latency-svc-7rzzb
Oct 22 20:31:15.777: INFO: Got endpoints: latency-svc-7rzzb [780.887379ms]
Oct 22 20:31:15.799: INFO: Created: latency-svc-gbj69
Oct 22 20:31:15.853: INFO: Got endpoints: latency-svc-gbj69 [816.56118ms]
Oct 22 20:31:15.874: INFO: Created: latency-svc-7whmr
Oct 22 20:31:15.880: INFO: Got endpoints: latency-svc-7whmr [795.669056ms]
Oct 22 20:31:15.911: INFO: Created: latency-svc-lvxgx
Oct 22 20:31:15.928: INFO: Got endpoints: latency-svc-lvxgx [782.976614ms]
Oct 22 20:31:15.985: INFO: Created: latency-svc-qlcr9
Oct 22 20:31:15.987: INFO: Got endpoints: latency-svc-qlcr9 [800.113335ms]
Oct 22 20:31:16.033: INFO: Created: latency-svc-hkc2x
Oct 22 20:31:16.049: INFO: Got endpoints: latency-svc-hkc2x [782.65785ms]
Oct 22 20:31:16.075: INFO: Created: latency-svc-nd6ct
Oct 22 20:31:16.122: INFO: Got endpoints: latency-svc-nd6ct [845.011094ms]
Oct 22 20:31:16.135: INFO: Created: latency-svc-r2cr6
Oct 22 20:31:16.145: INFO: Got endpoints: latency-svc-r2cr6 [825.673346ms]
Oct 22 20:31:16.168: INFO: Created: latency-svc-xlrkk
Oct 22 20:31:16.192: INFO: Got endpoints: latency-svc-xlrkk [836.560055ms]
Oct 22 20:31:16.260: INFO: Created: latency-svc-9md7h
Oct 22 20:31:16.263: INFO: Got endpoints: latency-svc-9md7h [849.502685ms]
Oct 22 20:31:16.297: INFO: Created: latency-svc-4q5l2
Oct 22 20:31:16.315: INFO: Got endpoints: latency-svc-4q5l2 [832.416777ms]
Oct 22 20:31:16.340: INFO: Created: latency-svc-5nnsh
Oct 22 20:31:16.350: INFO: Got endpoints: latency-svc-5nnsh [773.16953ms]
Oct 22 20:31:16.393: INFO: Created: latency-svc-5jq2h
Oct 22 20:31:16.414: INFO: Got endpoints: latency-svc-5jq2h [799.529766ms]
Oct 22 20:31:16.458: INFO: Created: latency-svc-ztjsb
Oct 22 20:31:16.477: INFO: Got endpoints: latency-svc-ztjsb [825.766361ms]
Oct 22 20:31:16.543: INFO: Created: latency-svc-hhmms
Oct 22 20:31:16.544: INFO: Got endpoints: latency-svc-hhmms [827.075641ms]
Oct 22 20:31:16.727: INFO: Created: latency-svc-h4xxs
Oct 22 20:31:16.759: INFO: Got endpoints: latency-svc-h4xxs [981.710324ms]
Oct 22 20:31:16.784: INFO: Created: latency-svc-x4z58
Oct 22 20:31:16.802: INFO: Got endpoints: latency-svc-x4z58 [949.010471ms]
Oct 22 20:31:16.825: INFO: Created: latency-svc-v2jv8
Oct 22 20:31:16.865: INFO: Got endpoints: latency-svc-v2jv8 [984.957611ms]
Oct 22 20:31:16.882: INFO: Created: latency-svc-xq2p8
Oct 22 20:31:16.916: INFO: Got endpoints: latency-svc-xq2p8 [987.09432ms]
Oct 22 20:31:17.033: INFO: Created: latency-svc-bq6mk
Oct 22 20:31:17.039: INFO: Got endpoints: latency-svc-bq6mk [1.052275091s]
Oct 22 20:31:17.092: INFO: Created: latency-svc-t95nl
Oct 22 20:31:17.117: INFO: Got endpoints: latency-svc-t95nl [1.068617296s]
Oct 22 20:31:17.179: INFO: Created: latency-svc-wxfj6
Oct 22 20:31:17.196: INFO: Got endpoints: latency-svc-wxfj6 [1.073630764s]
Oct 22 20:31:17.227: INFO: Created: latency-svc-wzdcx
Oct 22 20:31:17.245: INFO: Got endpoints: latency-svc-wzdcx [1.099846141s]
Oct 22 20:31:17.309: INFO: Created: latency-svc-dqj7l
Oct 22 20:31:17.316: INFO: Got endpoints: latency-svc-dqj7l [1.12346366s]
Oct 22 20:31:17.338: INFO: Created: latency-svc-bj4px
Oct 22 20:31:17.352: INFO: Got endpoints: latency-svc-bj4px [1.089622971s]
Oct 22 20:31:17.375: INFO: Created: latency-svc-xb4gr
Oct 22 20:31:17.386: INFO: Got endpoints: latency-svc-xb4gr [1.07084068s]
Oct 22 20:31:17.478: INFO: Created: latency-svc-v7s6z
Oct 22 20:31:17.503: INFO: Got endpoints: latency-svc-v7s6z [1.153007258s]
Oct 22 20:31:17.597: INFO: Created: latency-svc-vmxd9
Oct 22 20:31:17.620: INFO: Got endpoints: latency-svc-vmxd9 [1.205367778s]
Oct 22 20:31:17.620: INFO: Created: latency-svc-5t8hk
Oct 22 20:31:17.629: INFO: Got endpoints: latency-svc-5t8hk [1.152063328s]
Oct 22 20:31:17.653: INFO: Created: latency-svc-fnkmc
Oct 22 20:31:17.688: INFO: Got endpoints: latency-svc-fnkmc [1.143779038s]
Oct 22 20:31:17.758: INFO: Created: latency-svc-nqkh9
Oct 22 20:31:17.786: INFO: Got endpoints: latency-svc-nqkh9 [1.026683868s]
Oct 22 20:31:17.818: INFO: Created: latency-svc-stchf
Oct 22 20:31:17.828: INFO: Got endpoints: latency-svc-stchf [1.026098922s]
Oct 22 20:31:17.895: INFO: Created: latency-svc-chsrf
Oct 22 20:31:17.898: INFO: Got endpoints: latency-svc-chsrf [1.033238905s]
Oct 22 20:31:17.953: INFO: Created: latency-svc-9cb8l
Oct 22 20:31:17.966: INFO: Got endpoints: latency-svc-9cb8l [1.050573477s]
Oct 22 20:31:17.989: INFO: Created: latency-svc-8mz86
Oct 22 20:31:18.026: INFO: Got endpoints: latency-svc-8mz86 [986.975639ms]
Oct 22 20:31:18.043: INFO: Created: latency-svc-2v9pj
Oct 22 20:31:18.058: INFO: Got endpoints: latency-svc-2v9pj [940.35306ms]
Oct 22 20:31:18.083: INFO: Created: latency-svc-8vrws
Oct 22 20:31:18.108: INFO: Got endpoints: latency-svc-8vrws [912.521437ms]
Oct 22 20:31:18.170: INFO: Created: latency-svc-62d9q
Oct 22 20:31:18.173: INFO: Got endpoints: latency-svc-62d9q [927.768291ms]
Oct 22 20:31:18.205: INFO: Created: latency-svc-c4fp6
Oct 22 20:31:18.226: INFO: Got endpoints: latency-svc-c4fp6 [909.999632ms]
Oct 22 20:31:18.250: INFO: Created: latency-svc-n2dm9
Oct 22 20:31:18.268: INFO: Got endpoints: latency-svc-n2dm9 [915.545164ms]
Oct 22 20:31:18.315: INFO: Created: latency-svc-gzhpc
Oct 22 20:31:18.322: INFO: Got endpoints: latency-svc-gzhpc [936.629466ms]
Oct 22 20:31:18.342: INFO: Created: latency-svc-ft8gz
Oct 22 20:31:18.359: INFO: Got endpoints: latency-svc-ft8gz [855.553702ms]
Oct 22 20:31:18.379: INFO: Created: latency-svc-rz725
Oct 22 20:31:18.395: INFO: Got endpoints: latency-svc-rz725 [775.238255ms]
Oct 22 20:31:18.446: INFO: Created: latency-svc-blj5m
Oct 22 20:31:18.466: INFO: Got endpoints: latency-svc-blj5m [836.788542ms]
Oct 22 20:31:18.508: INFO: Created: latency-svc-qxnr6
Oct 22 20:31:18.522: INFO: Got endpoints: latency-svc-qxnr6 [833.155122ms]
Oct 22 20:31:18.591: INFO: Created: latency-svc-2wlnj
Oct 22 20:31:18.605: INFO: Got endpoints: latency-svc-2wlnj [819.209711ms]
Oct 22 20:31:18.670: INFO: Created: latency-svc-v6zlf
Oct 22 20:31:18.684: INFO: Got endpoints: latency-svc-v6zlf [855.926443ms]
Oct 22 20:31:18.751: INFO: Created: latency-svc-gtnrr
Oct 22 20:31:18.769: INFO: Got endpoints: latency-svc-gtnrr [870.526852ms]
Oct 22 20:31:18.793: INFO: Created: latency-svc-hdzd8
Oct 22 20:31:18.811: INFO: Got endpoints: latency-svc-hdzd8 [844.457563ms]
Oct 22 20:31:18.835: INFO: Created: latency-svc-78zxm
Oct 22 20:31:18.877: INFO: Got endpoints: latency-svc-78zxm [850.590373ms]
Oct 22 20:31:18.904: INFO: Created: latency-svc-5qwgp
Oct 22 20:31:18.937: INFO: Got endpoints: latency-svc-5qwgp [879.319508ms]
Oct 22 20:31:18.973: INFO: Created: latency-svc-2zjvf
Oct 22 20:31:19.020: INFO: Got endpoints: latency-svc-2zjvf [911.933445ms]
Oct 22 20:31:19.032: INFO: Created: latency-svc-d46nn
Oct 22 20:31:19.045: INFO: Got endpoints: latency-svc-d46nn [872.357062ms]
Oct 22 20:31:19.075: INFO: Created: latency-svc-rjlm7
Oct 22 20:31:19.099: INFO: Got endpoints: latency-svc-rjlm7 [873.595908ms]
Oct 22 20:31:19.152: INFO: Created: latency-svc-zkg4p
Oct 22 20:31:19.160: INFO: Got endpoints: latency-svc-zkg4p [891.648617ms]
Oct 22 20:31:19.186: INFO: Created: latency-svc-nfvcm
Oct 22 20:31:19.202: INFO: Got endpoints: latency-svc-nfvcm [879.722554ms]
Oct 22 20:31:19.224: INFO: Created: latency-svc-fc8rm
Oct 22 20:31:19.238: INFO: Got endpoints: latency-svc-fc8rm [879.425936ms]
Oct 22 20:31:19.290: INFO: Created: latency-svc-8qqfj
Oct 22 20:31:19.293: INFO: Got endpoints: latency-svc-8qqfj [897.440948ms]
Oct 22 20:31:19.348: INFO: Created: latency-svc-vgrng
Oct 22 20:31:19.359: INFO: Got endpoints: latency-svc-vgrng [892.740619ms]
Oct 22 20:31:19.384: INFO: Created: latency-svc-khb44
Oct 22 20:31:19.421: INFO: Got endpoints: latency-svc-khb44 [899.818269ms]
Oct 22 20:31:19.434: INFO: Created: latency-svc-w9pcr
Oct 22 20:31:19.450: INFO: Got endpoints: latency-svc-w9pcr [844.299789ms]
Oct 22 20:31:19.470: INFO: Created: latency-svc-x6w8d
Oct 22 20:31:19.485: INFO: Got endpoints: latency-svc-x6w8d [801.554247ms]
Oct 22 20:31:19.518: INFO: Created: latency-svc-bt7gx
Oct 22 20:31:19.565: INFO: Got endpoints: latency-svc-bt7gx [796.731088ms]
Oct 22 20:31:19.577: INFO: Created: latency-svc-vtl6d
Oct 22 20:31:19.594: INFO: Got endpoints: latency-svc-vtl6d [783.646944ms]
Oct 22 20:31:19.618: INFO: Created: latency-svc-9l8kd
Oct 22 20:31:19.630: INFO: Got endpoints: latency-svc-9l8kd [753.455656ms]
Oct 22 20:31:19.630: INFO: Latencies: [106.183343ms 122.268275ms 273.731841ms 342.576604ms 425.671641ms 464.634712ms 508.856505ms 573.043187ms 609.934649ms 658.274731ms 718.873957ms 723.738423ms 726.690265ms 732.943773ms 735.027308ms 736.637134ms 744.616985ms 748.214579ms 753.384818ms 753.455656ms 758.583874ms 762.538899ms 765.723618ms 772.571674ms 773.16953ms 775.238255ms 777.933058ms 780.887379ms 782.65785ms 782.976614ms 783.646944ms 790.963265ms 791.06706ms 793.981022ms 795.669056ms 796.731088ms 798.064959ms 799.529766ms 799.597616ms 799.768789ms 800.113335ms 801.554247ms 808.526606ms 809.560844ms 816.56118ms 819.209711ms 820.365135ms 821.233998ms 825.308005ms 825.367904ms 825.673346ms 825.766361ms 827.075641ms 827.310862ms 828.149117ms 830.471908ms 830.497295ms 831.834498ms 832.018736ms 832.373762ms 832.416777ms 832.713808ms 833.11261ms 833.155122ms 833.347195ms 835.335712ms 836.560055ms 836.788542ms 839.308098ms 843.395726ms 844.122732ms 844.157387ms 844.299789ms 844.457563ms 845.011094ms 845.882973ms 846.414286ms 847.415175ms 849.502685ms 850.590373ms 853.153257ms 854.312966ms 855.216791ms 855.553702ms 855.926443ms 861.786064ms 863.599574ms 867.269975ms 867.321319ms 868.097533ms 868.578303ms 869.051116ms 870.526852ms 872.325755ms 872.357062ms 873.595908ms 875.102562ms 876.521948ms 876.568867ms 879.319508ms 879.418257ms 879.425936ms 879.722554ms 885.253427ms 885.644767ms 890.549756ms 891.648617ms 892.740619ms 893.979447ms 897.204041ms 897.279122ms 897.440948ms 899.818269ms 900.495734ms 902.676401ms 904.5998ms 905.19308ms 907.441603ms 909.77746ms 909.999632ms 911.933445ms 912.521437ms 914.72158ms 914.850585ms 915.054499ms 915.545164ms 919.617125ms 920.872654ms 927.768291ms 929.045928ms 936.629466ms 939.524167ms 940.35306ms 949.010471ms 951.943706ms 951.989983ms 960.900658ms 961.73676ms 965.690964ms 975.116926ms 981.710324ms 984.957611ms 985.933955ms 986.975639ms 987.09432ms 987.596406ms 987.614484ms 987.875808ms 990.242162ms 993.38856ms 1.00033247s 1.002072715s 1.005342703s 1.010746019s 1.011059559s 1.012875606s 1.014940308s 1.016985647s 1.018261543s 1.021570262s 1.023391508s 1.024262737s 1.024811516s 1.026098922s 1.026683868s 1.032247645s 1.033238905s 1.033500947s 1.038189233s 1.050573477s 1.052275091s 1.05318957s 1.058527884s 1.059407501s 1.059548512s 1.068617296s 1.07084068s 1.073630764s 1.089622971s 1.099846141s 1.12346366s 1.143779038s 1.152063328s 1.153007258s 1.205367778s 1.574779535s 1.600506645s 1.606694739s 1.721333399s 1.768281538s 1.779285841s 1.796328541s 1.801893246s 1.805700098s 1.818145146s 1.819526458s 1.820286401s 1.829614464s 1.843433206s 1.86117903s]
Oct 22 20:31:19.631: INFO: 50 %ile: 879.418257ms
Oct 22 20:31:19.631: INFO: 90 %ile: 1.12346366s
Oct 22 20:31:19.631: INFO: 99 %ile: 1.843433206s
Oct 22 20:31:19.631: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:31:19.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-8045" for this suite.
Oct 22 20:31:45.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:31:45.772: INFO: namespace svc-latency-8045 deletion completed in 26.121277344s

• [SLOW TEST:42.279 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
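Editor's note: the latency spec above collects 200 endpoint-propagation samples and prints the 50th/90th/99th percentiles. Below is a minimal sketch of computing such percentiles from a slice of durations; the indexing convention is an assumption and may not match the framework's exact rounding.

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the sample at the given percentile (0-100) of a sorted slice,
// using a simple "index = N*p/100" convention (an assumption, not the framework's code).
func percentile(sorted []time.Duration, p int) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := len(sorted) * p / 100
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// A handful of samples standing in for the 200 latencies collected by the spec.
	samples := []time.Duration{
		106 * time.Millisecond, 425 * time.Millisecond, 879 * time.Millisecond,
		914 * time.Millisecond, 1123 * time.Millisecond, 1843 * time.Millisecond,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
	}
}
```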
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:31:45.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Oct 22 20:31:45.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-4620'
Oct 22 20:31:45.922: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Oct 22 20:31:45.922: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Oct 22 20:31:47.654: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-2v579]
Oct 22 20:31:47.654: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-2v579" in namespace "kubectl-4620" to be "running and ready"
Oct 22 20:31:47.847: INFO: Pod "e2e-test-nginx-rc-2v579": Phase="Pending", Reason="", readiness=false. Elapsed: 193.310643ms
Oct 22 20:31:49.851: INFO: Pod "e2e-test-nginx-rc-2v579": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196989978s
Oct 22 20:31:51.856: INFO: Pod "e2e-test-nginx-rc-2v579": Phase="Running", Reason="", readiness=true. Elapsed: 4.201493791s
Oct 22 20:31:51.856: INFO: Pod "e2e-test-nginx-rc-2v579" satisfied condition "running and ready"
Oct 22 20:31:51.856: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-2v579]
Oct 22 20:31:51.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-4620'
Oct 22 20:31:51.968: INFO: stderr: ""
Oct 22 20:31:51.968: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Oct 22 20:31:51.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-4620'
Oct 22 20:31:52.072: INFO: stderr: ""
Oct 22 20:31:52.072: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:31:52.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4620" for this suite.
Oct 22 20:32:14.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:32:14.245: INFO: namespace kubectl-4620 deletion completed in 22.169153075s

• [SLOW TEST:28.473 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
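Editor's note: the kubectl spec above drives the CLI directly: it runs kubectl run with the (deprecated) run/v1 generator, waits for the pod, and fetches logs by rc name. A hedged sketch of invoking kubectl the same way from Go with os/exec follows; the binary path, kubeconfig path, image, and namespace are copied from the log and are placeholders for any other environment.

```go
package main

import (
	"fmt"
	"os/exec"
)

// runKubectl invokes the kubectl binary against a fixed kubeconfig and returns
// its combined stdout/stderr, mirroring how the e2e framework shells out to the CLI.
func runKubectl(args ...string) (string, error) {
	base := []string{"--kubeconfig=/root/.kube/config"}
	cmd := exec.Command("/usr/local/bin/kubectl", append(base, args...)...)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	ns := "kubectl-4620" // namespace name taken from the log above
	if out, err := runKubectl("run", "e2e-test-nginx-rc",
		"--image=docker.io/library/nginx:1.14-alpine",
		"--generator=run/v1", "--namespace="+ns); err != nil {
		fmt.Println("create failed:", err, out)
		return
	}
	// Fetch logs by rc name, as the spec does ("confirm that you can get logs from an rc").
	out, err := runKubectl("logs", "rc/e2e-test-nginx-rc", "--namespace="+ns)
	fmt.Println("logs:", out, "err:", err)
}
```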
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:32:14.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-49925d1f-328c-4aeb-8ddd-181fcc0b02dd
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:32:20.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9939" for this suite.
Oct 22 20:32:42.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:32:42.488: INFO: namespace configmap-9939 deletion completed in 22.090348891s

• [SLOW TEST:28.242 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
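Editor's note: the ConfigMap spec above creates a ConfigMap whose BinaryData is projected into a volume alongside its text Data. A minimal sketch of such an object using the k8s.io/api types is shown below; the name and byte payload are made up for illustration, and the client-go create call is omitted.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A ConfigMap carrying both text data and binary data, as the spec exercises.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd-example"}, // placeholder name
		Data:       map[string]string{"data-1": "value-1"},
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xad, 0xbe, 0xef}},
	}
	// Print the object; BinaryData is base64-encoded in the JSON form.
	b, _ := json.MarshalIndent(cm, "", "  ")
	fmt.Println(string(b))
}
```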
SSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:32:42.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Oct 22 20:32:42.574: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Oct 22 20:32:44.624: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:32:45.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1042" for this suite.
Oct 22 20:32:51.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:32:51.875: INFO: namespace replication-controller-1042 deletion completed in 6.217591459s

• [SLOW TEST:9.387 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
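Editor's note: the ReplicationController spec above creates a ResourceQuota that allows only two pods, then an rc asking for more, and expects a ReplicaFailure condition until the rc is scaled down to fit. A sketch of the two objects involved, assuming the k8s.io/api types; the replica count and image are illustrative, not read from the test source.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Quota allowing only two pods in the namespace, like the "condition-test" quota in the log.
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
		},
	}

	// An rc that asks for more replicas than the quota allows; the controller is expected
	// to surface a ReplicaFailure condition until the rc is scaled down.
	replicas := int32(3) // illustrative count exceeding the quota
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: map[string]string{"name": "condition-test"},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "condition-test"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "nginx", Image: "docker.io/library/nginx:1.14-alpine"}},
				},
			},
		},
	}
	fmt.Println(quota.Name, rc.Name, *rc.Spec.Replicas)
}
```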
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:32:51.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-df82d0b8-e820-49c9-b6e0-19e697e2304a
STEP: Creating a pod to test consume secrets
Oct 22 20:32:52.261: INFO: Waiting up to 5m0s for pod "pod-secrets-accdf183-d936-43af-bd27-ce7612faaf56" in namespace "secrets-6566" to be "success or failure"
Oct 22 20:32:52.783: INFO: Pod "pod-secrets-accdf183-d936-43af-bd27-ce7612faaf56": Phase="Pending", Reason="", readiness=false. Elapsed: 522.286794ms
Oct 22 20:32:54.788: INFO: Pod "pod-secrets-accdf183-d936-43af-bd27-ce7612faaf56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.526834689s
Oct 22 20:32:56.792: INFO: Pod "pod-secrets-accdf183-d936-43af-bd27-ce7612faaf56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.530814794s
STEP: Saw pod success
Oct 22 20:32:56.792: INFO: Pod "pod-secrets-accdf183-d936-43af-bd27-ce7612faaf56" satisfied condition "success or failure"
Oct 22 20:32:56.795: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-accdf183-d936-43af-bd27-ce7612faaf56 container secret-volume-test: 
STEP: delete the pod
Oct 22 20:32:56.914: INFO: Waiting for pod pod-secrets-accdf183-d936-43af-bd27-ce7612faaf56 to disappear
Oct 22 20:32:56.938: INFO: Pod pod-secrets-accdf183-d936-43af-bd27-ce7612faaf56 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:32:56.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6566" for this suite.
Oct 22 20:33:02.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:33:03.029: INFO: namespace secrets-6566 deletion completed in 6.088432258s

• [SLOW TEST:11.154 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
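Editor's note: the Secrets spec above mounts a secret into a pod through a volume and has a test container read the projected file. A minimal sketch of that volume wiring, assuming the k8s.io/api types; the secret name, image, key, and mount path are placeholders.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secretName := "secret-test-example" // placeholder; the spec generates a UUID-suffixed name
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secretName},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/secret-volume/data-1"}, // reads the projected key
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```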
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:33:03.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Oct 22 20:33:03.147: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/: alternatives.log containers/ (the same directory listing was returned for each proxied request; the rest of this spec's output is missing from the capture)
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Oct 22 20:33:09.389: INFO: Waiting up to 5m0s for pod "var-expansion-f9abeb30-faee-4707-900c-ccc0a7b7eb5f" in namespace "var-expansion-343" to be "success or failure"
Oct 22 20:33:09.399: INFO: Pod "var-expansion-f9abeb30-faee-4707-900c-ccc0a7b7eb5f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.276378ms
Oct 22 20:33:12.363: INFO: Pod "var-expansion-f9abeb30-faee-4707-900c-ccc0a7b7eb5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.974257684s
Oct 22 20:33:14.368: INFO: Pod "var-expansion-f9abeb30-faee-4707-900c-ccc0a7b7eb5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.978665365s
Oct 22 20:33:16.372: INFO: Pod "var-expansion-f9abeb30-faee-4707-900c-ccc0a7b7eb5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.983089809s
STEP: Saw pod success
Oct 22 20:33:16.372: INFO: Pod "var-expansion-f9abeb30-faee-4707-900c-ccc0a7b7eb5f" satisfied condition "success or failure"
Oct 22 20:33:16.375: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-f9abeb30-faee-4707-900c-ccc0a7b7eb5f container dapi-container: 
STEP: delete the pod
Oct 22 20:33:16.394: INFO: Waiting for pod var-expansion-f9abeb30-faee-4707-900c-ccc0a7b7eb5f to disappear
Oct 22 20:33:16.399: INFO: Pod var-expansion-f9abeb30-faee-4707-900c-ccc0a7b7eb5f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:33:16.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-343" for this suite.
Oct 22 20:33:22.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:33:22.561: INFO: namespace var-expansion-343 deletion completed in 6.159507711s

• [SLOW TEST:13.263 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
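Editor's note: the Variable Expansion spec above composes one environment variable from another using the $(VAR) syntax in the container spec, which the kubelet expands when the container starts. A tiny sketch of such an Env list; the names and values are illustrative, not the test's own.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// $(FOO) is expanded at container start because FOO is declared earlier in the list,
	// so BAR is observed inside the container as "foo-value;;".
	env := []corev1.EnvVar{
		{Name: "FOO", Value: "foo-value"},
		{Name: "BAR", Value: "$(FOO);;"},
	}
	for _, e := range env {
		fmt.Printf("%s=%s\n", e.Name, e.Value)
	}
}
```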
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Oct 22 20:33:22.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Oct 22 20:33:26.700: INFO: Waiting up to 5m0s for pod "client-envvars-36393306-7a88-4c37-8532-e59969abda0b" in namespace "pods-7047" to be "success or failure"
Oct 22 20:33:26.705: INFO: Pod "client-envvars-36393306-7a88-4c37-8532-e59969abda0b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.282424ms
Oct 22 20:33:28.800: INFO: Pod "client-envvars-36393306-7a88-4c37-8532-e59969abda0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100739133s
Oct 22 20:33:30.805: INFO: Pod "client-envvars-36393306-7a88-4c37-8532-e59969abda0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104957626s
STEP: Saw pod success
Oct 22 20:33:30.805: INFO: Pod "client-envvars-36393306-7a88-4c37-8532-e59969abda0b" satisfied condition "success or failure"
Oct 22 20:33:30.807: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-36393306-7a88-4c37-8532-e59969abda0b container env3cont: 
STEP: delete the pod
Oct 22 20:33:30.843: INFO: Waiting for pod client-envvars-36393306-7a88-4c37-8532-e59969abda0b to disappear
Oct 22 20:33:30.855: INFO: Pod client-envvars-36393306-7a88-4c37-8532-e59969abda0b no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Oct 22 20:33:30.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7047" for this suite.
Oct 22 20:34:20.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Oct 22 20:34:20.946: INFO: namespace pods-7047 deletion completed in 50.08829541s

• [SLOW TEST:58.384 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
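Editor's note: the Pods spec above checks that a pod created after a service exists sees the injected <SERVICE>_SERVICE_HOST and <SERVICE>_SERVICE_PORT environment variables. A sketch of what the client container effectively does; the FOOSERVICE_ prefix is a placeholder for whatever service name the test creates.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// For a service named "fooservice", the kubelet injects variables such as
	// FOOSERVICE_SERVICE_HOST and FOOSERVICE_SERVICE_PORT into pods started afterwards.
	prefix := "FOOSERVICE_" // placeholder prefix
	for _, kv := range os.Environ() {
		if strings.HasPrefix(kv, prefix) {
			fmt.Println(kv)
		}
	}
}
```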
SSSSSSSSSSSSSSSSSSS
Oct 22 20:34:20.947: INFO: Running AfterSuite actions on all nodes
Oct 22 20:34:20.947: INFO: Running AfterSuite actions on node 1
Oct 22 20:34:20.947: INFO: Skipping dumping logs from cluster

Ran 215 of 4413 Specs in 6162.548 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4198 Skipped
PASS