I1127 20:48:07.709228 7 e2e.go:243] Starting e2e run "e9fa431c-5999-4c55-b468-3c0829a3b1cf" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1606510076 - Will randomize all specs
Will run 215 of 4413 specs

Nov 27 20:48:09.033: INFO: >>> kubeConfig: /root/.kube/config
Nov 27 20:48:09.101: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Nov 27 20:48:09.262: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Nov 27 20:48:09.425: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Nov 27 20:48:09.425: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Nov 27 20:48:09.426: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Nov 27 20:48:09.466: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Nov 27 20:48:09.466: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Nov 27 20:48:09.467: INFO: e2e test version: v1.15.12
Nov 27 20:48:09.470: INFO: kube-apiserver version: v1.15.11
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 20:48:09.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
Nov 27 20:48:09.594: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Nov 27 20:48:14.879: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 20:48:14.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9" for this suite.
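For reference, the terminated-container test above asserts that with `terminationMessagePolicy: FallbackToLogsOnError` a container that exits successfully reports an empty termination message (the fallback to logs only happens on error exits). A minimal sketch of such a pod spec as a plain Python dict; the name, image, and command are illustrative assumptions, not the test's generated values:

```python
# Sketch of the kind of pod spec the termination-message test creates.
# Names, image, and command are illustrative assumptions.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "termination-message-demo"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [
            {
                "name": "main",
                "image": "busybox",
                # Exit successfully without writing the termination-log file.
                "command": ["/bin/sh", "-c", "exit 0"],
                # FallbackToLogsOnError falls back to container logs only when
                # the container exits with an error; on success the message
                # stays empty, which is what the test asserts.
                "terminationMessagePolicy": "FallbackToLogsOnError",
                "terminationMessagePath": "/dev/termination-log",
            }
        ],
    },
}

container = pod_manifest["spec"]["containers"][0]
print(container["terminationMessagePolicy"])  # → FallbackToLogsOnError
```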
Nov 27 20:48:20.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 20:48:21.214: INFO: namespace container-runtime-9 deletion completed in 6.257241353s

• [SLOW TEST:11.740 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment
  should create a deployment from an image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 20:48:21.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Nov 27 20:48:21.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-4288'
Nov 27 20:48:25.207: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Nov 27 20:48:25.208: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Nov 27 20:48:27.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4288'
Nov 27 20:48:28.869: INFO: stderr: ""
Nov 27 20:48:28.869: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 20:48:28.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4288" for this suite.
Nov 27 20:48:50.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 20:48:51.068: INFO: namespace kubectl-4288 deletion completed in 22.188901572s

• [SLOW TEST:29.849 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 20:48:51.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-63678625-ba60-49d0-abe3-382f906c7daa in namespace container-probe-3740
Nov 27 20:48:55.243: INFO: Started pod busybox-63678625-ba60-49d0-abe3-382f906c7daa in namespace container-probe-3740
STEP: checking the pod's current state and verifying that restartCount is present
Nov 27 20:48:55.250: INFO: Initial restart count of pod busybox-63678625-ba60-49d0-abe3-382f906c7daa is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 20:52:56.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3740" for this suite.
Nov 27 20:53:02.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 20:53:02.649: INFO: namespace container-probe-3740 deletion completed in 6.20122818s

• [SLOW TEST:251.577 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 20:53:02.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-4gpc
STEP: Creating a pod to test atomic-volume-subpath
Nov 27 20:53:02.821: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4gpc" in namespace "subpath-4494" to be "success or failure"
Nov 27 20:53:02.887: INFO: Pod "pod-subpath-test-configmap-4gpc": Phase="Pending", Reason="", readiness=false. Elapsed: 64.908314ms
Nov 27 20:53:04.894: INFO: Pod "pod-subpath-test-configmap-4gpc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072917027s
Nov 27 20:53:06.902: INFO: Pod "pod-subpath-test-configmap-4gpc": Phase="Running", Reason="", readiness=true. Elapsed: 4.080579166s
Nov 27 20:53:08.910: INFO: Pod "pod-subpath-test-configmap-4gpc": Phase="Running", Reason="", readiness=true. Elapsed: 6.08802893s
Nov 27 20:53:10.918: INFO: Pod "pod-subpath-test-configmap-4gpc": Phase="Running", Reason="", readiness=true. Elapsed: 8.096915812s
Nov 27 20:53:12.925: INFO: Pod "pod-subpath-test-configmap-4gpc": Phase="Running", Reason="", readiness=true. Elapsed: 10.10398397s
Nov 27 20:53:14.933: INFO: Pod "pod-subpath-test-configmap-4gpc": Phase="Running", Reason="", readiness=true. Elapsed: 12.111378838s
Nov 27 20:53:16.940: INFO: Pod "pod-subpath-test-configmap-4gpc": Phase="Running", Reason="", readiness=true. Elapsed: 14.118278942s
Nov 27 20:53:18.947: INFO: Pod "pod-subpath-test-configmap-4gpc": Phase="Running", Reason="", readiness=true. Elapsed: 16.125473819s
Nov 27 20:53:20.952: INFO: Pod "pod-subpath-test-configmap-4gpc": Phase="Running", Reason="", readiness=true. Elapsed: 18.130966891s
Nov 27 20:53:22.959: INFO: Pod "pod-subpath-test-configmap-4gpc": Phase="Running", Reason="", readiness=true. Elapsed: 20.137734416s
Nov 27 20:53:24.968: INFO: Pod "pod-subpath-test-configmap-4gpc": Phase="Running", Reason="", readiness=true. Elapsed: 22.146211007s
Nov 27 20:53:26.975: INFO: Pod "pod-subpath-test-configmap-4gpc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.153915708s
STEP: Saw pod success
Nov 27 20:53:26.976: INFO: Pod "pod-subpath-test-configmap-4gpc" satisfied condition "success or failure"
Nov 27 20:53:26.982: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-4gpc container test-container-subpath-configmap-4gpc:
STEP: delete the pod
Nov 27 20:53:27.087: INFO: Waiting for pod pod-subpath-test-configmap-4gpc to disappear
Nov 27 20:53:27.298: INFO: Pod pod-subpath-test-configmap-4gpc no longer exists
STEP: Deleting pod pod-subpath-test-configmap-4gpc
Nov 27 20:53:27.298: INFO: Deleting pod "pod-subpath-test-configmap-4gpc" in namespace "subpath-4494"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 20:53:27.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4494" for this suite.
Nov 27 20:53:33.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 20:53:33.540: INFO: namespace subpath-4494 deletion completed in 6.229301229s

• [SLOW TEST:30.887 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 20:53:33.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Nov 27 20:53:53.678: INFO: Container started at 2020-11-27 20:53:36 +0000 UTC, pod became ready at 2020-11-27 20:53:53 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 20:53:53.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8406" for this suite.
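The readiness-probe test above logs the container start time and the time the pod first reported Ready, and checks that readiness never preceded the probe's initial delay. A sketch of that comparison using the timestamps from the log; the 10-second delay value is an illustrative assumption, since the actual initialDelaySeconds lives in the test's pod spec:

```python
from datetime import datetime, timedelta

# Timestamps reported by the readiness-probe test in the log above (UTC).
started = datetime(2020, 11, 27, 20, 53, 36)
became_ready = datetime(2020, 11, 27, 20, 53, 53)

# Illustrative assumption: the probe's initialDelaySeconds. The real value
# is defined in the e2e test's pod spec, not in this log excerpt.
initial_delay = timedelta(seconds=10)

# The assertion the test makes: the pod must not have been Ready before
# start time + initial delay elapsed.
not_ready_too_early = became_ready >= started + initial_delay
elapsed = (became_ready - started).total_seconds()
print(not_ready_too_early, elapsed)
```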
Nov 27 20:54:15.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 20:54:15.860: INFO: namespace container-probe-8406 deletion completed in 22.170553697s

• [SLOW TEST:42.319 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application
  should create and stop a working application [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 20:54:15.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Nov 27 20:54:15.986: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Nov 27 20:54:15.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6950'
Nov 27 20:54:17.721: INFO: stderr: ""
Nov 27 20:54:17.721: INFO: stdout: "service/redis-slave created\n"
Nov 27 20:54:17.722: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Nov 27 20:54:17.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6950'
Nov 27 20:54:19.471: INFO: stderr: ""
Nov 27 20:54:19.471: INFO: stdout: "service/redis-master created\n"
Nov 27 20:54:19.472: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Nov 27 20:54:19.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6950'
Nov 27 20:54:21.479: INFO: stderr: ""
Nov 27 20:54:21.479: INFO: stdout: "service/frontend created\n"
Nov 27 20:54:21.485: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Nov 27 20:54:21.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6950'
Nov 27 20:54:23.160: INFO: stderr: ""
Nov 27 20:54:23.161: INFO: stdout: "deployment.apps/frontend created\n"
Nov 27 20:54:23.162: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Nov 27 20:54:23.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6950'
Nov 27 20:54:25.234: INFO: stderr: ""
Nov 27 20:54:25.234: INFO: stdout: "deployment.apps/redis-master created\n"
Nov 27 20:54:25.236: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Nov 27 20:54:25.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6950'
Nov 27 20:54:27.687: INFO: stderr: ""
Nov 27 20:54:27.687: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Nov 27 20:54:27.687: INFO: Waiting for all frontend pods to be Running.
Nov 27 20:54:32.743: INFO: Waiting for frontend to serve content.
Nov 27 20:54:32.766: INFO: Trying to add a new entry to the guestbook.
Nov 27 20:54:32.782: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Nov 27 20:54:32.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6950'
Nov 27 20:54:34.122: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 27 20:54:34.123: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Nov 27 20:54:34.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6950'
Nov 27 20:54:35.395: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 27 20:54:35.395: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Nov 27 20:54:35.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6950'
Nov 27 20:54:36.695: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 27 20:54:36.696: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Nov 27 20:54:36.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6950'
Nov 27 20:54:38.129: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 27 20:54:38.129: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Nov 27 20:54:38.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6950'
Nov 27 20:54:39.791: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 27 20:54:39.791: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Nov 27 20:54:39.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6950'
Nov 27 20:54:41.076: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 27 20:54:41.076: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 20:54:41.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6950" for this suite.
Nov 27 20:55:27.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 20:55:27.323: INFO: namespace kubectl-6950 deletion completed in 46.239540967s

• [SLOW TEST:71.462 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 20:55:27.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-f0899108-6133-4f3a-8841-4f5cc0d066ef in namespace container-probe-4443
Nov 27 20:55:31.477: INFO: Started pod test-webserver-f0899108-6133-4f3a-8841-4f5cc0d066ef in namespace container-probe-4443
STEP: checking the pod's current state and verifying that restartCount is present
Nov 27 20:55:31.481: INFO: Initial restart count of pod test-webserver-f0899108-6133-4f3a-8841-4f5cc0d066ef is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 20:59:32.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4443" for this suite.
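The http liveness test above runs a `test-webserver` pod whose `/healthz` endpoint keeps answering 200, so the kubelet's `httpGet` probe never restarts the container. As a local stand-in for that behavior (not the actual e2e test-webserver image), here is a minimal `/healthz` HTTP server using only the Python standard library:

```python
# Minimal stand-in for a pod webserver whose /healthz always returns 200,
# the condition under which an httpGet liveness probe never fires.
# Port and handler details are illustrative, not the e2e image's code.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

class HealthzHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            body = b"ok"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        # Keep the demo quiet instead of logging every request.
        pass

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), HealthzHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Probe it once, the way a kubelet httpGet check would.
url = f"http://127.0.0.1:{server.server_port}/healthz"
with urllib.request.urlopen(url) as resp:
    status = resp.status
    payload = resp.read()
server.shutdown()
print(status, payload)  # → 200 b'ok'
```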
Nov 27 20:59:39.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 20:59:39.165: INFO: namespace container-probe-4443 deletion completed in 6.237407855s

• [SLOW TEST:251.840 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 20:59:39.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-f61691d6-83fb-4e20-8603-7cf7a665d8f9
STEP: Creating a pod to test consume configMaps
Nov 27 20:59:39.307: INFO: Waiting up to 5m0s for pod "pod-configmaps-4074bb85-381e-4132-bd60-8cd9e6fc6071" in namespace "configmap-923" to be "success or failure"
Nov 27 20:59:39.339: INFO: Pod "pod-configmaps-4074bb85-381e-4132-bd60-8cd9e6fc6071": Phase="Pending", Reason="", readiness=false. Elapsed: 32.129676ms
Nov 27 20:59:41.346: INFO: Pod "pod-configmaps-4074bb85-381e-4132-bd60-8cd9e6fc6071": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039170832s
Nov 27 20:59:43.354: INFO: Pod "pod-configmaps-4074bb85-381e-4132-bd60-8cd9e6fc6071": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046427393s
STEP: Saw pod success
Nov 27 20:59:43.354: INFO: Pod "pod-configmaps-4074bb85-381e-4132-bd60-8cd9e6fc6071" satisfied condition "success or failure"
Nov 27 20:59:43.360: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-4074bb85-381e-4132-bd60-8cd9e6fc6071 container configmap-volume-test:
STEP: delete the pod
Nov 27 20:59:43.383: INFO: Waiting for pod pod-configmaps-4074bb85-381e-4132-bd60-8cd9e6fc6071 to disappear
Nov 27 20:59:43.387: INFO: Pod pod-configmaps-4074bb85-381e-4132-bd60-8cd9e6fc6071 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 20:59:43.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-923" for this suite.
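The configmap-volume test above wires a ConfigMap into a pod through `spec.volumes` plus a matching `volumeMounts` entry; the invariant is that the two names line up. A sketch of that wiring as a Python dict, with an explicit check of the invariant; all names and the image are illustrative assumptions, not the test's generated values:

```python
# Sketch of a configmap-as-volume pod spec. Names and image are illustrative.
configmap_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-configmap-demo"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "configmap-volume-test",
            "image": "busybox",
            # Read a key projected by the ConfigMap volume.
            "command": ["cat", "/etc/configmap-volume/data-1"],
            "volumeMounts": [{
                "name": "configmap-volume",
                "mountPath": "/etc/configmap-volume",
            }],
        }],
        "volumes": [{
            # Must match the volumeMounts name above.
            "name": "configmap-volume",
            "configMap": {"name": "configmap-test-volume"},
        }],
    },
}

# Invariant the kubelet relies on: every mount references a declared volume.
mount_names = {m["name"]
               for c in configmap_pod["spec"]["containers"]
               for m in c["volumeMounts"]}
volume_names = {v["name"] for v in configmap_pod["spec"]["volumes"]}
print(mount_names == volume_names)  # → True
```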
Nov 27 20:59:49.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 20:59:49.570: INFO: namespace configmap-923 deletion completed in 6.175912924s • [SLOW TEST:10.401 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 20:59:49.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Nov 27 20:59:49.654: INFO: Waiting up to 5m0s for pod "pod-649a7858-7013-4708-983c-29113c71148f" in namespace "emptydir-5500" to be "success or failure" Nov 27 20:59:49.670: INFO: Pod "pod-649a7858-7013-4708-983c-29113c71148f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.146386ms Nov 27 20:59:51.726: INFO: Pod "pod-649a7858-7013-4708-983c-29113c71148f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071909199s Nov 27 20:59:53.733: INFO: Pod "pod-649a7858-7013-4708-983c-29113c71148f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079000887s STEP: Saw pod success Nov 27 20:59:53.733: INFO: Pod "pod-649a7858-7013-4708-983c-29113c71148f" satisfied condition "success or failure" Nov 27 20:59:53.738: INFO: Trying to get logs from node iruya-worker2 pod pod-649a7858-7013-4708-983c-29113c71148f container test-container: STEP: delete the pod Nov 27 20:59:53.852: INFO: Waiting for pod pod-649a7858-7013-4708-983c-29113c71148f to disappear Nov 27 20:59:53.858: INFO: Pod pod-649a7858-7013-4708-983c-29113c71148f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 20:59:53.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5500" for this suite. 
Nov 27 20:59:59.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:00:00.059: INFO: namespace emptydir-5500 deletion completed in 6.194812009s • [SLOW TEST:10.488 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:00:00.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Nov 27 21:00:00.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9535' 
Nov 27 21:00:04.776: INFO: stderr: "" Nov 27 21:00:04.776: INFO: stdout: "replicationcontroller/redis-master created\n" Nov 27 21:00:04.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9535' Nov 27 21:00:06.835: INFO: stderr: "" Nov 27 21:00:06.835: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Nov 27 21:00:07.846: INFO: Selector matched 1 pods for map[app:redis] Nov 27 21:00:07.849: INFO: Found 0 / 1 Nov 27 21:00:08.846: INFO: Selector matched 1 pods for map[app:redis] Nov 27 21:00:08.846: INFO: Found 1 / 1 Nov 27 21:00:08.847: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Nov 27 21:00:08.853: INFO: Selector matched 1 pods for map[app:redis] Nov 27 21:00:08.854: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Nov 27 21:00:08.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-pkjgr --namespace=kubectl-9535' Nov 27 21:00:10.198: INFO: stderr: "" Nov 27 21:00:10.198: INFO: stdout: "Name: redis-master-pkjgr\nNamespace: kubectl-9535\nPriority: 0\nNode: iruya-worker2/172.18.0.5\nStart Time: Fri, 27 Nov 2020 21:00:04 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.143\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://e5adea05519c07c9f3c4f522c7eccc4dcfe3b5946e2162f4b6384dcc559d7f68\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 27 Nov 2020 21:00:07 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-hxrx5 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True 
\nVolumes:\n default-token-hxrx5:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-hxrx5\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 6s default-scheduler Successfully assigned kubectl-9535/redis-master-pkjgr to iruya-worker2\n Normal Pulled 4s kubelet, iruya-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 3s kubelet, iruya-worker2 Created container redis-master\n Normal Started 3s kubelet, iruya-worker2 Started container redis-master\n" Nov 27 21:00:10.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-9535' Nov 27 21:00:11.635: INFO: stderr: "" Nov 27 21:00:11.636: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9535\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 7s replication-controller Created pod: redis-master-pkjgr\n" Nov 27 21:00:11.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-9535' Nov 27 21:00:13.038: INFO: stderr: "" Nov 27 21:00:13.038: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9535\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.100.19.232\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 
10.244.2.143:6379\nSession Affinity: None\nEvents: \n" Nov 27 21:00:13.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Nov 27 21:00:14.467: INFO: stderr: "" Nov 27 21:00:14.467: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 23 Sep 2020 08:25:31 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 27 Nov 2020 20:59:30 +0000 Wed, 23 Sep 2020 08:25:31 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 27 Nov 2020 20:59:30 +0000 Wed, 23 Sep 2020 08:25:31 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 27 Nov 2020 20:59:30 +0000 Wed, 23 Sep 2020 08:25:31 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 27 Nov 2020 20:59:30 +0000 Wed, 23 Sep 2020 08:26:01 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: 75bedc8ea3a84920a6257d408ae4fc72\n System UUID: f7c1d795-23db-4f0f-aa92-a051f5bbc85d\n Boot ID: b267d78b-f69b-4338-80e8-3f4944338e5d\n 
Kernel Version: 4.15.0-118-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.15.11\n Kube-Proxy Version: v1.15.11\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-5d4dd4b4db-ktm6r 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 65d\n kube-system coredns-5d4dd4b4db-m9gbg 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 65d\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 65d\n kube-system kindnet-rv6n5 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 65d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 65d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 65d\n kube-system kube-proxy-zcw5n 0 (0%) 0 (0%) 0 (0%) 0 (0%) 65d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 65d\n local-path-storage local-path-provisioner-668779bd7-t77bq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 65d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Nov 27 21:00:14.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9535' Nov 27 21:00:15.785: INFO: stderr: "" Nov 27 21:00:15.785: INFO: stdout: "Name: kubectl-9535\nLabels: e2e-framework=kubectl\n e2e-run=e9fa431c-5999-4c55-b468-3c0829a3b1cf\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:00:15.786: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9535" for this suite. Nov 27 21:00:37.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:00:37.992: INFO: namespace kubectl-9535 deletion completed in 22.198154242s • [SLOW TEST:37.930 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:00:37.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment 
to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: Gathering metrics W1127 21:00:38.822389 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 27 21:00:38.823: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:00:38.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2385" for this suite. 
Nov 27 21:00:44.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:00:45.079: INFO: namespace gc-2385 deletion completed in 6.249898098s • [SLOW TEST:7.084 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:00:45.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W1127 21:01:15.241650 7 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 27 21:01:15.242: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:01:15.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8553" for this suite. 
Nov 27 21:01:21.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:01:21.535: INFO: namespace gc-8553 deletion completed in 6.284419107s • [SLOW TEST:36.454 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:01:21.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-534ec0e9-6458-4d21-9f32-41cdc6f8e70c STEP: Creating a pod to test consume secrets Nov 27 21:01:21.680: INFO: Waiting up to 5m0s for pod 
"pod-projected-secrets-161329cf-ddf6-4025-8508-b303534be57e" in namespace "projected-2942" to be "success or failure" Nov 27 21:01:21.686: INFO: Pod "pod-projected-secrets-161329cf-ddf6-4025-8508-b303534be57e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.215579ms Nov 27 21:01:23.693: INFO: Pod "pod-projected-secrets-161329cf-ddf6-4025-8508-b303534be57e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012272407s Nov 27 21:01:25.700: INFO: Pod "pod-projected-secrets-161329cf-ddf6-4025-8508-b303534be57e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018811292s STEP: Saw pod success Nov 27 21:01:25.700: INFO: Pod "pod-projected-secrets-161329cf-ddf6-4025-8508-b303534be57e" satisfied condition "success or failure" Nov 27 21:01:25.706: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-161329cf-ddf6-4025-8508-b303534be57e container projected-secret-volume-test: STEP: delete the pod Nov 27 21:01:25.799: INFO: Waiting for pod pod-projected-secrets-161329cf-ddf6-4025-8508-b303534be57e to disappear Nov 27 21:01:25.893: INFO: Pod pod-projected-secrets-161329cf-ddf6-4025-8508-b303534be57e no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:01:25.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2942" for this suite. 
Nov 27 21:01:31.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:01:32.109: INFO: namespace projected-2942 deletion completed in 6.208156389s • [SLOW TEST:10.574 seconds] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:01:32.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service 
test in namespace statefulset-8185 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8185 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8185 Nov 27 21:01:32.277: INFO: Found 0 stateful pods, waiting for 1 Nov 27 21:01:42.285: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Nov 27 21:01:42.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8185 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Nov 27 21:01:44.242: INFO: stderr: "I1127 21:01:43.710107 546 log.go:172] (0x400012ce70) (0x4000672a00) Create stream\nI1127 21:01:43.713666 546 log.go:172] (0x400012ce70) (0x4000672a00) Stream added, broadcasting: 1\nI1127 21:01:43.725568 546 log.go:172] (0x400012ce70) Reply frame received for 1\nI1127 21:01:43.726664 546 log.go:172] (0x400012ce70) (0x4000672aa0) Create stream\nI1127 21:01:43.726750 546 log.go:172] (0x400012ce70) (0x4000672aa0) Stream added, broadcasting: 3\nI1127 21:01:43.728197 546 log.go:172] (0x400012ce70) Reply frame received for 3\nI1127 21:01:43.728432 546 log.go:172] (0x400012ce70) (0x4000940000) Create stream\nI1127 21:01:43.728491 546 log.go:172] (0x400012ce70) (0x4000940000) Stream added, broadcasting: 5\nI1127 21:01:43.729636 546 log.go:172] (0x400012ce70) Reply frame received for 5\nI1127 21:01:44.084645 546 log.go:172] (0x400012ce70) Data frame received for 5\nI1127 21:01:44.084927 546 log.go:172] (0x4000940000) (5) Data frame handling\nI1127 21:01:44.085367 546 log.go:172] (0x4000940000) 
(5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1127 21:01:44.217303 546 log.go:172] (0x400012ce70) Data frame received for 5\nI1127 21:01:44.217569 546 log.go:172] (0x4000940000) (5) Data frame handling\nI1127 21:01:44.217768 546 log.go:172] (0x400012ce70) Data frame received for 3\nI1127 21:01:44.217962 546 log.go:172] (0x4000672aa0) (3) Data frame handling\nI1127 21:01:44.218185 546 log.go:172] (0x4000672aa0) (3) Data frame sent\nI1127 21:01:44.218370 546 log.go:172] (0x400012ce70) Data frame received for 3\nI1127 21:01:44.218538 546 log.go:172] (0x4000672aa0) (3) Data frame handling\nI1127 21:01:44.219390 546 log.go:172] (0x400012ce70) Data frame received for 1\nI1127 21:01:44.219506 546 log.go:172] (0x4000672a00) (1) Data frame handling\nI1127 21:01:44.219612 546 log.go:172] (0x4000672a00) (1) Data frame sent\nI1127 21:01:44.222310 546 log.go:172] (0x400012ce70) (0x4000672a00) Stream removed, broadcasting: 1\nI1127 21:01:44.222727 546 log.go:172] (0x400012ce70) Go away received\nI1127 21:01:44.226174 546 log.go:172] (0x400012ce70) (0x4000672a00) Stream removed, broadcasting: 1\nI1127 21:01:44.226401 546 log.go:172] (0x400012ce70) (0x4000672aa0) Stream removed, broadcasting: 3\nI1127 21:01:44.226580 546 log.go:172] (0x400012ce70) (0x4000940000) Stream removed, broadcasting: 5\n" Nov 27 21:01:44.243: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Nov 27 21:01:44.243: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Nov 27 21:01:44.249: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Nov 27 21:01:54.256: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 27 21:01:54.257: INFO: Waiting for statefulset status.replicas updated to 0 Nov 27 21:01:54.285: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999951801s Nov 27 21:01:55.291: 
INFO: Verifying statefulset ss doesn't scale past 1 for another 8.987932302s Nov 27 21:01:56.298: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.98167341s Nov 27 21:01:57.306: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.974712007s Nov 27 21:01:58.313: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.966461505s Nov 27 21:01:59.320: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.959764655s Nov 27 21:02:00.326: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.953046842s Nov 27 21:02:01.334: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.946460902s Nov 27 21:02:02.342: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.938799393s Nov 27 21:02:03.351: INFO: Verifying statefulset ss doesn't scale past 1 for another 930.663076ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8185 Nov 27 21:02:04.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8185 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Nov 27 21:02:05.850: INFO: stderr: "I1127 21:02:05.720977 568 log.go:172] (0x40006da000) (0x40009661e0) Create stream\nI1127 21:02:05.723370 568 log.go:172] (0x40006da000) (0x40009661e0) Stream added, broadcasting: 1\nI1127 21:02:05.735234 568 log.go:172] (0x40006da000) Reply frame received for 1\nI1127 21:02:05.736337 568 log.go:172] (0x40006da000) (0x400066e320) Create stream\nI1127 21:02:05.736473 568 log.go:172] (0x40006da000) (0x400066e320) Stream added, broadcasting: 3\nI1127 21:02:05.738580 568 log.go:172] (0x40006da000) Reply frame received for 3\nI1127 21:02:05.739091 568 log.go:172] (0x40006da000) (0x40001f8000) Create stream\nI1127 21:02:05.739204 568 log.go:172] (0x40006da000) (0x40001f8000) Stream added, broadcasting: 5\nI1127 21:02:05.740948 568 log.go:172] (0x40006da000) Reply frame 
received for 5\nI1127 21:02:05.831015 568 log.go:172] (0x40006da000) Data frame received for 3\nI1127 21:02:05.831310 568 log.go:172] (0x40006da000) Data frame received for 5\nI1127 21:02:05.831477 568 log.go:172] (0x40001f8000) (5) Data frame handling\nI1127 21:02:05.831533 568 log.go:172] (0x40006da000) Data frame received for 1\nI1127 21:02:05.831607 568 log.go:172] (0x40009661e0) (1) Data frame handling\nI1127 21:02:05.831711 568 log.go:172] (0x400066e320) (3) Data frame handling\nI1127 21:02:05.832542 568 log.go:172] (0x400066e320) (3) Data frame sent\nI1127 21:02:05.832715 568 log.go:172] (0x40001f8000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI1127 21:02:05.833036 568 log.go:172] (0x40009661e0) (1) Data frame sent\nI1127 21:02:05.833119 568 log.go:172] (0x40006da000) Data frame received for 3\nI1127 21:02:05.833181 568 log.go:172] (0x400066e320) (3) Data frame handling\nI1127 21:02:05.833866 568 log.go:172] (0x40006da000) (0x40009661e0) Stream removed, broadcasting: 1\nI1127 21:02:05.834034 568 log.go:172] (0x40006da000) Data frame received for 5\nI1127 21:02:05.836377 568 log.go:172] (0x40001f8000) (5) Data frame handling\nI1127 21:02:05.837377 568 log.go:172] (0x40006da000) Go away received\nI1127 21:02:05.839503 568 log.go:172] (0x40006da000) (0x40009661e0) Stream removed, broadcasting: 1\nI1127 21:02:05.839873 568 log.go:172] (0x40006da000) (0x400066e320) Stream removed, broadcasting: 3\nI1127 21:02:05.840140 568 log.go:172] (0x40006da000) (0x40001f8000) Stream removed, broadcasting: 5\n" Nov 27 21:02:05.852: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Nov 27 21:02:05.852: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Nov 27 21:02:05.858: INFO: Found 1 stateful pods, waiting for 3 Nov 27 21:02:15.868: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Nov 27 21:02:15.868: 
INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Nov 27 21:02:15.868: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Nov 27 21:02:15.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8185 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Nov 27 21:02:17.388: INFO: stderr: "I1127 21:02:17.249961 591 log.go:172] (0x4000138c60) (0x4000922140) Create stream\nI1127 21:02:17.255880 591 log.go:172] (0x4000138c60) (0x4000922140) Stream added, broadcasting: 1\nI1127 21:02:17.264953 591 log.go:172] (0x4000138c60) Reply frame received for 1\nI1127 21:02:17.265501 591 log.go:172] (0x4000138c60) (0x400093a000) Create stream\nI1127 21:02:17.265564 591 log.go:172] (0x4000138c60) (0x400093a000) Stream added, broadcasting: 3\nI1127 21:02:17.267062 591 log.go:172] (0x4000138c60) Reply frame received for 3\nI1127 21:02:17.267589 591 log.go:172] (0x4000138c60) (0x400067a1e0) Create stream\nI1127 21:02:17.267740 591 log.go:172] (0x4000138c60) (0x400067a1e0) Stream added, broadcasting: 5\nI1127 21:02:17.269390 591 log.go:172] (0x4000138c60) Reply frame received for 5\nI1127 21:02:17.363689 591 log.go:172] (0x4000138c60) Data frame received for 5\nI1127 21:02:17.364027 591 log.go:172] (0x4000138c60) Data frame received for 3\nI1127 21:02:17.364331 591 log.go:172] (0x4000138c60) Data frame received for 1\nI1127 21:02:17.364466 591 log.go:172] (0x4000922140) (1) Data frame handling\nI1127 21:02:17.364587 591 log.go:172] (0x400067a1e0) (5) Data frame handling\nI1127 21:02:17.364806 591 log.go:172] (0x400093a000) (3) Data frame handling\nI1127 21:02:17.365897 591 log.go:172] (0x400093a000) (3) Data frame sent\nI1127 21:02:17.365993 591 log.go:172] (0x400067a1e0) (5) Data frame sent\nI1127 21:02:17.366232 591 log.go:172] 
(0x4000922140) (1) Data frame sent\nI1127 21:02:17.366857 591 log.go:172] (0x4000138c60) Data frame received for 3\nI1127 21:02:17.366986 591 log.go:172] (0x400093a000) (3) Data frame handling\nI1127 21:02:17.367179 591 log.go:172] (0x4000138c60) Data frame received for 5\nI1127 21:02:17.367284 591 log.go:172] (0x400067a1e0) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1127 21:02:17.370065 591 log.go:172] (0x4000138c60) (0x4000922140) Stream removed, broadcasting: 1\nI1127 21:02:17.371584 591 log.go:172] (0x4000138c60) Go away received\nI1127 21:02:17.375667 591 log.go:172] (0x4000138c60) (0x4000922140) Stream removed, broadcasting: 1\nI1127 21:02:17.376414 591 log.go:172] (0x4000138c60) (0x400093a000) Stream removed, broadcasting: 3\nI1127 21:02:17.376722 591 log.go:172] (0x4000138c60) (0x400067a1e0) Stream removed, broadcasting: 5\n" Nov 27 21:02:17.389: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Nov 27 21:02:17.389: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Nov 27 21:02:17.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8185 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Nov 27 21:02:18.943: INFO: stderr: "I1127 21:02:18.774896 615 log.go:172] (0x40005f6000) (0x400065e460) Create stream\nI1127 21:02:18.778175 615 log.go:172] (0x40005f6000) (0x400065e460) Stream added, broadcasting: 1\nI1127 21:02:18.791983 615 log.go:172] (0x40005f6000) Reply frame received for 1\nI1127 21:02:18.793543 615 log.go:172] (0x40005f6000) (0x400002a000) Create stream\nI1127 21:02:18.793685 615 log.go:172] (0x40005f6000) (0x400002a000) Stream added, broadcasting: 3\nI1127 21:02:18.795969 615 log.go:172] (0x40005f6000) Reply frame received for 3\nI1127 21:02:18.796342 615 log.go:172] (0x40005f6000) (0x4000200000) Create stream\nI1127 21:02:18.796415 615 
log.go:172] (0x40005f6000) (0x4000200000) Stream added, broadcasting: 5\nI1127 21:02:18.798024 615 log.go:172] (0x40005f6000) Reply frame received for 5\nI1127 21:02:18.891052 615 log.go:172] (0x40005f6000) Data frame received for 5\nI1127 21:02:18.891242 615 log.go:172] (0x4000200000) (5) Data frame handling\nI1127 21:02:18.891586 615 log.go:172] (0x4000200000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1127 21:02:18.919240 615 log.go:172] (0x40005f6000) Data frame received for 3\nI1127 21:02:18.919464 615 log.go:172] (0x400002a000) (3) Data frame handling\nI1127 21:02:18.919588 615 log.go:172] (0x40005f6000) Data frame received for 5\nI1127 21:02:18.919813 615 log.go:172] (0x4000200000) (5) Data frame handling\nI1127 21:02:18.919900 615 log.go:172] (0x400002a000) (3) Data frame sent\nI1127 21:02:18.920017 615 log.go:172] (0x40005f6000) Data frame received for 3\nI1127 21:02:18.920110 615 log.go:172] (0x400002a000) (3) Data frame handling\nI1127 21:02:18.921387 615 log.go:172] (0x40005f6000) Data frame received for 1\nI1127 21:02:18.921505 615 log.go:172] (0x400065e460) (1) Data frame handling\nI1127 21:02:18.921633 615 log.go:172] (0x400065e460) (1) Data frame sent\nI1127 21:02:18.922845 615 log.go:172] (0x40005f6000) (0x400065e460) Stream removed, broadcasting: 1\nI1127 21:02:18.928036 615 log.go:172] (0x40005f6000) Go away received\nI1127 21:02:18.930732 615 log.go:172] (0x40005f6000) (0x400065e460) Stream removed, broadcasting: 1\nI1127 21:02:18.931605 615 log.go:172] (0x40005f6000) (0x400002a000) Stream removed, broadcasting: 3\nI1127 21:02:18.932264 615 log.go:172] (0x40005f6000) (0x4000200000) Stream removed, broadcasting: 5\n" Nov 27 21:02:18.944: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Nov 27 21:02:18.944: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Nov 27 21:02:18.944: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-8185 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Nov 27 21:02:20.511: INFO: stderr: "I1127 21:02:20.349565 638 log.go:172] (0x4000136fd0) (0x40005a06e0) Create stream\nI1127 21:02:20.352791 638 log.go:172] (0x4000136fd0) (0x40005a06e0) Stream added, broadcasting: 1\nI1127 21:02:20.365507 638 log.go:172] (0x4000136fd0) Reply frame received for 1\nI1127 21:02:20.366161 638 log.go:172] (0x4000136fd0) (0x40007ca000) Create stream\nI1127 21:02:20.366237 638 log.go:172] (0x4000136fd0) (0x40007ca000) Stream added, broadcasting: 3\nI1127 21:02:20.367810 638 log.go:172] (0x4000136fd0) Reply frame received for 3\nI1127 21:02:20.368141 638 log.go:172] (0x4000136fd0) (0x400086c000) Create stream\nI1127 21:02:20.368224 638 log.go:172] (0x4000136fd0) (0x400086c000) Stream added, broadcasting: 5\nI1127 21:02:20.370033 638 log.go:172] (0x4000136fd0) Reply frame received for 5\nI1127 21:02:20.439833 638 log.go:172] (0x4000136fd0) Data frame received for 5\nI1127 21:02:20.440110 638 log.go:172] (0x400086c000) (5) Data frame handling\nI1127 21:02:20.440591 638 log.go:172] (0x400086c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1127 21:02:20.483853 638 log.go:172] (0x4000136fd0) Data frame received for 3\nI1127 21:02:20.484093 638 log.go:172] (0x40007ca000) (3) Data frame handling\nI1127 21:02:20.484294 638 log.go:172] (0x4000136fd0) Data frame received for 5\nI1127 21:02:20.484466 638 log.go:172] (0x400086c000) (5) Data frame handling\nI1127 21:02:20.484654 638 log.go:172] (0x40007ca000) (3) Data frame sent\nI1127 21:02:20.484766 638 log.go:172] (0x4000136fd0) Data frame received for 3\nI1127 21:02:20.484915 638 log.go:172] (0x40007ca000) (3) Data frame handling\nI1127 21:02:20.486083 638 log.go:172] (0x4000136fd0) Data frame received for 1\nI1127 21:02:20.486205 638 log.go:172] (0x40005a06e0) (1) Data frame handling\nI1127 21:02:20.486303 638 log.go:172] 
(0x40005a06e0) (1) Data frame sent\nI1127 21:02:20.488339 638 log.go:172] (0x4000136fd0) (0x40005a06e0) Stream removed, broadcasting: 1\nI1127 21:02:20.492211 638 log.go:172] (0x4000136fd0) Go away received\nI1127 21:02:20.500302 638 log.go:172] (0x4000136fd0) (0x40005a06e0) Stream removed, broadcasting: 1\nI1127 21:02:20.501810 638 log.go:172] (0x4000136fd0) (0x40007ca000) Stream removed, broadcasting: 3\nI1127 21:02:20.502026 638 log.go:172] (0x4000136fd0) (0x400086c000) Stream removed, broadcasting: 5\n" Nov 27 21:02:20.512: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Nov 27 21:02:20.512: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Nov 27 21:02:20.512: INFO: Waiting for statefulset status.replicas updated to 0 Nov 27 21:02:20.523: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Nov 27 21:02:30.537: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 27 21:02:30.538: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Nov 27 21:02:30.538: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Nov 27 21:02:30.585: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999993642s Nov 27 21:02:31.593: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.961012279s Nov 27 21:02:32.616: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.953492369s Nov 27 21:02:33.624: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.930084451s Nov 27 21:02:34.634: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.921746324s Nov 27 21:02:35.642: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.912565708s Nov 27 21:02:36.653: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.904329576s Nov 27 21:02:37.663: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 2.892991268s Nov 27 21:02:38.672: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.882829807s Nov 27 21:02:39.680: INFO: Verifying statefulset ss doesn't scale past 3 for another 873.818801ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-8185 Nov 27 21:02:40.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8185 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Nov 27 21:02:42.197: INFO: stderr: "I1127 21:02:42.074762 660 log.go:172] (0x40005ea630) (0x40004e66e0) Create stream\nI1127 21:02:42.077423 660 log.go:172] (0x40005ea630) (0x40004e66e0) Stream added, broadcasting: 1\nI1127 21:02:42.087416 660 log.go:172] (0x40005ea630) Reply frame received for 1\nI1127 21:02:42.088307 660 log.go:172] (0x40005ea630) (0x40005d01e0) Create stream\nI1127 21:02:42.088369 660 log.go:172] (0x40005ea630) (0x40005d01e0) Stream added, broadcasting: 3\nI1127 21:02:42.090365 660 log.go:172] (0x40005ea630) Reply frame received for 3\nI1127 21:02:42.090892 660 log.go:172] (0x40005ea630) (0x40004e6780) Create stream\nI1127 21:02:42.090990 660 log.go:172] (0x40005ea630) (0x40004e6780) Stream added, broadcasting: 5\nI1127 21:02:42.093057 660 log.go:172] (0x40005ea630) Reply frame received for 5\nI1127 21:02:42.175772 660 log.go:172] (0x40005ea630) Data frame received for 5\nI1127 21:02:42.176124 660 log.go:172] (0x40005ea630) Data frame received for 3\nI1127 21:02:42.176345 660 log.go:172] (0x40004e6780) (5) Data frame handling\nI1127 21:02:42.176512 660 log.go:172] (0x40005ea630) Data frame received for 1\nI1127 21:02:42.176578 660 log.go:172] (0x40004e66e0) (1) Data frame handling\nI1127 21:02:42.176716 660 log.go:172] (0x40005d01e0) (3) Data frame handling\nI1127 21:02:42.177565 660 log.go:172] (0x40004e66e0) (1) Data frame sent\nI1127 21:02:42.177726 660 log.go:172]
(0x40004e6780) (5) Data frame sent\nI1127 21:02:42.177880 660 log.go:172] (0x40005d01e0) (3) Data frame sent\nI1127 21:02:42.178084 660 log.go:172] (0x40005ea630) Data frame received for 3\nI1127 21:02:42.178180 660 log.go:172] (0x40005d01e0) (3) Data frame handling\nI1127 21:02:42.178515 660 log.go:172] (0x40005ea630) Data frame received for 5\nI1127 21:02:42.178641 660 log.go:172] (0x40004e6780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI1127 21:02:42.181137 660 log.go:172] (0x40005ea630) (0x40004e66e0) Stream removed, broadcasting: 1\nI1127 21:02:42.186371 660 log.go:172] (0x40005ea630) Go away received\nI1127 21:02:42.187235 660 log.go:172] (0x40005ea630) (0x40004e66e0) Stream removed, broadcasting: 1\nI1127 21:02:42.188655 660 log.go:172] (0x40005ea630) (0x40005d01e0) Stream removed, broadcasting: 3\nI1127 21:02:42.189092 660 log.go:172] (0x40005ea630) (0x40004e6780) Stream removed, broadcasting: 5\n" Nov 27 21:02:42.198: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Nov 27 21:02:42.198: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Nov 27 21:02:42.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8185 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Nov 27 21:02:43.659: INFO: stderr: "I1127 21:02:43.535796 683 log.go:172] (0x4000762630) (0x40007bc820) Create stream\nI1127 21:02:43.542407 683 log.go:172] (0x4000762630) (0x40007bc820) Stream added, broadcasting: 1\nI1127 21:02:43.559893 683 log.go:172] (0x4000762630) Reply frame received for 1\nI1127 21:02:43.560717 683 log.go:172] (0x4000762630) (0x40007bc000) Create stream\nI1127 21:02:43.560890 683 log.go:172] (0x4000762630) (0x40007bc000) Stream added, broadcasting: 3\nI1127 21:02:43.562311 683 log.go:172] (0x4000762630) Reply frame received for 3\nI1127 21:02:43.562609 683 
log.go:172] (0x4000762630) (0x40007bc0a0) Create stream\nI1127 21:02:43.562671 683 log.go:172] (0x4000762630) (0x40007bc0a0) Stream added, broadcasting: 5\nI1127 21:02:43.563818 683 log.go:172] (0x4000762630) Reply frame received for 5\nI1127 21:02:43.638244 683 log.go:172] (0x4000762630) Data frame received for 3\nI1127 21:02:43.638607 683 log.go:172] (0x4000762630) Data frame received for 5\nI1127 21:02:43.638790 683 log.go:172] (0x40007bc0a0) (5) Data frame handling\nI1127 21:02:43.638894 683 log.go:172] (0x4000762630) Data frame received for 1\nI1127 21:02:43.639014 683 log.go:172] (0x40007bc820) (1) Data frame handling\nI1127 21:02:43.639132 683 log.go:172] (0x40007bc000) (3) Data frame handling\nI1127 21:02:43.639954 683 log.go:172] (0x40007bc820) (1) Data frame sent\nI1127 21:02:43.640447 683 log.go:172] (0x40007bc0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI1127 21:02:43.641825 683 log.go:172] (0x40007bc000) (3) Data frame sent\nI1127 21:02:43.641903 683 log.go:172] (0x4000762630) Data frame received for 3\nI1127 21:02:43.641952 683 log.go:172] (0x40007bc000) (3) Data frame handling\nI1127 21:02:43.643425 683 log.go:172] (0x4000762630) (0x40007bc820) Stream removed, broadcasting: 1\nI1127 21:02:43.644435 683 log.go:172] (0x4000762630) Data frame received for 5\nI1127 21:02:43.644514 683 log.go:172] (0x40007bc0a0) (5) Data frame handling\nI1127 21:02:43.645088 683 log.go:172] (0x4000762630) Go away received\nI1127 21:02:43.646956 683 log.go:172] (0x4000762630) (0x40007bc820) Stream removed, broadcasting: 1\nI1127 21:02:43.647442 683 log.go:172] (0x4000762630) (0x40007bc000) Stream removed, broadcasting: 3\nI1127 21:02:43.647944 683 log.go:172] (0x4000762630) (0x40007bc0a0) Stream removed, broadcasting: 5\n" Nov 27 21:02:43.660: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Nov 27 21:02:43.660: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> 
'/usr/share/nginx/html/index.html' Nov 27 21:02:43.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8185 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Nov 27 21:02:45.030: INFO: rc: 1 Nov 27 21:02:45.032: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8185 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0x40019a7c50 exit status 1 true [0x4000946e10 0x4000946e70 0x4000946ea0] [0x4000946e10 0x4000946e70 0x4000946ea0] [0x4000946e68 0x4000946e80] [0xad5158 0xad5158] 0x4002ae79e0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Nov 27 21:02:55.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8185 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Nov 27 21:02:56.336: INFO: rc: 1 Nov 27 21:02:56.336: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8185 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x400152de30 exit status 1 true [0x4000739658 0x40007398e8 0x40007399f0] [0x4000739658 0x40007398e8 0x40007399f0] [0x4000739868 0x4000739988] [0xad5158 0xad5158] 0x4002aa0120 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1
[... the same RunHostCmd retry, failing with 'Error from server (NotFound): pods "ss-2" not found', repeats every 10s from 21:03:06 through 21:07:27; identical entries elided ...]
Nov 27 21:07:36.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8185 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Nov 27 21:07:38.153: INFO: rc: 1 Nov 27 21:07:38.154: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8185 ss-2 -- /bin/sh -x -c mv -v
/tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0x4000d721b0 exit status 1 true [0x4000946298 0x4000946378 0x4000946588] [0x4000946298 0x4000946378 0x4000946588] [0x40009462f8 0x4000946528] [0xad5158 0xad5158] 0x4002bfac60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 27 21:07:48.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8185 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Nov 27 21:07:49.415: INFO: rc: 1 Nov 27 21:07:49.416: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: Nov 27 21:07:49.416: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Nov 27 21:07:49.438: INFO: Deleting all statefulset in ns statefulset-8185 Nov 27 21:07:49.443: INFO: Scaling statefulset ss to 0 Nov 27 21:07:49.454: INFO: Waiting for statefulset status.replicas updated to 0 Nov 27 21:07:49.457: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:07:49.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8185" for this suite. 
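[editor's note] The "scaled down in reverse order" verification above is the default OrderedReady behavior of a StatefulSet: pods are removed one at a time, highest ordinal first, and scaling halts while any pod is unhealthy. A minimal sketch of the kind of StatefulSet the test drives (service name, image, and labels are illustrative, not taken from the test):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss                  # the test's StatefulSet is also named "ss"
spec:
  serviceName: test-svc     # assumed headless Service name
  replicas: 3
  podManagementPolicy: OrderedReady   # default: ordinal order, halt on unhealthy pods
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: nginx
        image: nginx        # illustrative; the e2e suite ships its own test image
        ports:
        - containerPort: 80
```

Scaling this to 0 (`kubectl scale statefulset ss --replicas=0`) deletes ss-2, then ss-1, then ss-0, which is exactly what the reverse-order check asserts.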
Nov 27 21:07:55.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:07:55.701: INFO: namespace statefulset-8185 deletion completed in 6.19352175s • [SLOW TEST:383.588 seconds] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:07:55.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' 
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:08:29.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2871" for this suite. 
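[editor's note] Judging by the names, the three containers above appear to exercise the three restartPolicy values ('rpa' = Always, 'rpof' = OnFailure, 'rpn' = Never), asserting the resulting RestartCount, Phase, Ready condition, and State for each. A hypothetical pod for the Never case, showing where those status fields come from:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpn     # name mirrors the test's 'rpn' container; pod shape is assumed
spec:
  restartPolicy: Never        # with Never, an exited container is not restarted:
                              # status ends up with restartCount 0 and a Terminated state
  containers:
  - name: terminate-cmd-rpn
    image: busybox            # illustrative image
    command: ["sh", "-c", "exit 0"]
```

`kubectl get pod terminate-cmd-rpn -o jsonpath='{.status.containerStatuses[0]}'` surfaces the same restartCount/state fields the test inspects.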
Nov 27 21:08:35.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:08:35.934: INFO: namespace container-runtime-2871 deletion completed in 6.178743855s • [SLOW TEST:40.230 seconds] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:08:35.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api 
env vars Nov 27 21:08:36.084: INFO: Waiting up to 5m0s for pod "downward-api-f6cdd624-9525-47a2-836e-c1e5c078e12c" in namespace "downward-api-9472" to be "success or failure" Nov 27 21:08:36.130: INFO: Pod "downward-api-f6cdd624-9525-47a2-836e-c1e5c078e12c": Phase="Pending", Reason="", readiness=false. Elapsed: 45.222257ms Nov 27 21:08:38.136: INFO: Pod "downward-api-f6cdd624-9525-47a2-836e-c1e5c078e12c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051804866s Nov 27 21:08:40.143: INFO: Pod "downward-api-f6cdd624-9525-47a2-836e-c1e5c078e12c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058564047s STEP: Saw pod success Nov 27 21:08:40.143: INFO: Pod "downward-api-f6cdd624-9525-47a2-836e-c1e5c078e12c" satisfied condition "success or failure" Nov 27 21:08:40.147: INFO: Trying to get logs from node iruya-worker2 pod downward-api-f6cdd624-9525-47a2-836e-c1e5c078e12c container dapi-container: STEP: delete the pod Nov 27 21:08:40.177: INFO: Waiting for pod downward-api-f6cdd624-9525-47a2-836e-c1e5c078e12c to disappear Nov 27 21:08:40.181: INFO: Pod downward-api-f6cdd624-9525-47a2-836e-c1e5c078e12c no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:08:40.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9472" for this suite. 
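[editor's note] The "downward api env vars" pod created above exposes pod metadata to its own container through `valueFrom.fieldRef`. A minimal sketch (pod name and image are illustrative; `dapi-container` matches the container name in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox            # illustrative; the test uses its own image
    command: ["sh", "-c", "env"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # downward API: the pod's UID, injected as an env var
```

The test then reads the container's logs and checks that the printed POD_UID matches the UID the API server assigned.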
Nov 27 21:08:46.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:08:46.413: INFO: namespace downward-api-9472 deletion completed in 6.223216369s • [SLOW TEST:10.477 seconds] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:08:46.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-ae1a10d4-9817-4405-aa89-a325a05d57bd STEP: Creating a pod to test consume secrets Nov 27 21:08:46.511: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b0fcefcb-d91d-417e-a84c-6a02881ea4d7" in namespace "projected-5542" to be "success or 
failure" Nov 27 21:08:46.564: INFO: Pod "pod-projected-secrets-b0fcefcb-d91d-417e-a84c-6a02881ea4d7": Phase="Pending", Reason="", readiness=false. Elapsed: 52.201939ms Nov 27 21:08:48.571: INFO: Pod "pod-projected-secrets-b0fcefcb-d91d-417e-a84c-6a02881ea4d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059675746s Nov 27 21:08:50.578: INFO: Pod "pod-projected-secrets-b0fcefcb-d91d-417e-a84c-6a02881ea4d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066648961s STEP: Saw pod success Nov 27 21:08:50.578: INFO: Pod "pod-projected-secrets-b0fcefcb-d91d-417e-a84c-6a02881ea4d7" satisfied condition "success or failure" Nov 27 21:08:50.584: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-b0fcefcb-d91d-417e-a84c-6a02881ea4d7 container projected-secret-volume-test: STEP: delete the pod Nov 27 21:08:50.606: INFO: Waiting for pod pod-projected-secrets-b0fcefcb-d91d-417e-a84c-6a02881ea4d7 to disappear Nov 27 21:08:50.615: INFO: Pod pod-projected-secrets-b0fcefcb-d91d-417e-a84c-6a02881ea4d7 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:08:50.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5542" for this suite. 
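[editor's note] The projected-secret pod above mounts a Secret through a `projected` volume with `defaultMode` set, then verifies the file permissions inside the container. A sketch of that shape (names are illustrative; the test generates UUID-suffixed names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test   # container name matches the log
    image: busybox                       # illustrative
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400                  # the mode under test: files appear as -r--------
      sources:
      - secret:
          name: projected-secret-test    # illustrative Secret name
```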
Nov 27 21:08:56.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:08:56.842: INFO: namespace projected-5542 deletion completed in 6.218498678s • [SLOW TEST:10.429 seconds] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:08:56.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Nov 27 21:08:56.964: INFO: Waiting up to 5m0s for pod "var-expansion-50a4bdf5-d28f-4383-96fe-1cb69fd81d85" in namespace "var-expansion-7463" to be "success or failure" Nov 27 21:08:56.982: INFO: Pod "var-expansion-50a4bdf5-d28f-4383-96fe-1cb69fd81d85": 
Phase="Pending", Reason="", readiness=false. Elapsed: 17.979926ms Nov 27 21:08:58.989: INFO: Pod "var-expansion-50a4bdf5-d28f-4383-96fe-1cb69fd81d85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025501466s Nov 27 21:09:00.996: INFO: Pod "var-expansion-50a4bdf5-d28f-4383-96fe-1cb69fd81d85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032266498s STEP: Saw pod success Nov 27 21:09:00.997: INFO: Pod "var-expansion-50a4bdf5-d28f-4383-96fe-1cb69fd81d85" satisfied condition "success or failure" Nov 27 21:09:01.001: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-50a4bdf5-d28f-4383-96fe-1cb69fd81d85 container dapi-container: STEP: delete the pod Nov 27 21:09:01.043: INFO: Waiting for pod var-expansion-50a4bdf5-d28f-4383-96fe-1cb69fd81d85 to disappear Nov 27 21:09:01.053: INFO: Pod var-expansion-50a4bdf5-d28f-4383-96fe-1cb69fd81d85 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:09:01.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7463" for this suite. 
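[editor's note] "Env composition" above refers to referencing one env var from another with `$(VAR)` syntax, which the kubelet expands before the container starts. A minimal sketch (names and image are illustrative; `dapi-container` matches the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox            # illustrative
    command: ["sh", "-c", "echo $COMPOSED"]
    env:
    - name: FOO
      value: "foo-value"
    - name: COMPOSED
      value: "prefix-$(FOO)-suffix"   # expands to "prefix-foo-value-suffix" at container start
```

Note that expansion only works for variables defined earlier in the same `env` list.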
Nov 27 21:09:07.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:09:07.239: INFO: namespace var-expansion-7463 deletion completed in 6.176654973s • [SLOW TEST:10.395 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:09:07.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Nov 27 21:09:07.436: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:09:07.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-614" for this suite. Nov 27 21:09:13.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:09:14.536: INFO: namespace replication-controller-614 deletion completed in 7.005025734s • [SLOW TEST:7.296 seconds] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:09:14.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification 
[NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Nov 27 21:09:19.339: INFO: Successfully updated pod "annotationupdate9a9f2f2e-f826-4030-8a38-7c0b69614ec7" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:09:21.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2589" for this suite. Nov 27 21:09:43.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:09:43.625: INFO: namespace downward-api-2589 deletion completed in 22.240710977s • [SLOW TEST:29.085 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:09:43.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting 
for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-p8jv STEP: Creating a pod to test atomic-volume-subpath Nov 27 21:09:43.788: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-p8jv" in namespace "subpath-8768" to be "success or failure" Nov 27 21:09:43.809: INFO: Pod "pod-subpath-test-configmap-p8jv": Phase="Pending", Reason="", readiness=false. Elapsed: 21.29389ms Nov 27 21:09:45.840: INFO: Pod "pod-subpath-test-configmap-p8jv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052027154s Nov 27 21:09:47.846: INFO: Pod "pod-subpath-test-configmap-p8jv": Phase="Running", Reason="", readiness=true. Elapsed: 4.057903434s Nov 27 21:09:49.853: INFO: Pod "pod-subpath-test-configmap-p8jv": Phase="Running", Reason="", readiness=true. Elapsed: 6.064697502s Nov 27 21:09:51.858: INFO: Pod "pod-subpath-test-configmap-p8jv": Phase="Running", Reason="", readiness=true. Elapsed: 8.069955826s Nov 27 21:09:53.864: INFO: Pod "pod-subpath-test-configmap-p8jv": Phase="Running", Reason="", readiness=true. Elapsed: 10.075904437s Nov 27 21:09:55.871: INFO: Pod "pod-subpath-test-configmap-p8jv": Phase="Running", Reason="", readiness=true. Elapsed: 12.082604256s Nov 27 21:09:57.878: INFO: Pod "pod-subpath-test-configmap-p8jv": Phase="Running", Reason="", readiness=true. Elapsed: 14.089974502s Nov 27 21:09:59.884: INFO: Pod "pod-subpath-test-configmap-p8jv": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.096325634s Nov 27 21:10:01.891: INFO: Pod "pod-subpath-test-configmap-p8jv": Phase="Running", Reason="", readiness=true. Elapsed: 18.103289165s Nov 27 21:10:03.898: INFO: Pod "pod-subpath-test-configmap-p8jv": Phase="Running", Reason="", readiness=true. Elapsed: 20.110545528s Nov 27 21:10:05.905: INFO: Pod "pod-subpath-test-configmap-p8jv": Phase="Running", Reason="", readiness=true. Elapsed: 22.117277999s Nov 27 21:10:07.918: INFO: Pod "pod-subpath-test-configmap-p8jv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.130520572s STEP: Saw pod success Nov 27 21:10:07.919: INFO: Pod "pod-subpath-test-configmap-p8jv" satisfied condition "success or failure" Nov 27 21:10:07.924: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-p8jv container test-container-subpath-configmap-p8jv: STEP: delete the pod Nov 27 21:10:07.951: INFO: Waiting for pod pod-subpath-test-configmap-p8jv to disappear Nov 27 21:10:07.955: INFO: Pod pod-subpath-test-configmap-p8jv no longer exists STEP: Deleting pod pod-subpath-test-configmap-p8jv Nov 27 21:10:07.955: INFO: Deleting pod "pod-subpath-test-configmap-p8jv" in namespace "subpath-8768" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:10:07.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8768" for this suite. 
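[editor's note] The atomic-writer subpath pod above mounts a single key of a ConfigMap via `subPath` and keeps running long enough (hence the ~24s of Running polls) for the test to verify the file contents stay consistent across atomic updates. A sketch of that shape (names, paths, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-configmap
    image: busybox                       # illustrative
    command: ["sh", "-c", "cat /test-volume/sub && sleep 20"]
    volumeMounts:
    - name: config
      mountPath: /test-volume/sub
      subPath: data                      # mounts only the "data" key, not the whole ConfigMap
  volumes:
  - name: config
    configMap:
      name: my-configmap                 # illustrative; the test creates its own ConfigMap
```

Unlike a plain volume mount, a `subPath` mount does not receive live updates when the ConfigMap changes, which is part of what the atomic-writer semantics exercise.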
Nov 27 21:10:13.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:10:14.149: INFO: namespace subpath-8768 deletion completed in 6.183731049s • [SLOW TEST:30.523 seconds] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:10:14.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting 
the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:10:14.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3406" for this suite. Nov 27 21:10:36.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:10:36.503: INFO: namespace pods-3406 deletion completed in 22.188368504s • [SLOW TEST:22.351 seconds] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:10:36.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Nov 27 21:10:36.591: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 27 21:10:36.619: INFO: Waiting for terminating namespaces to be deleted... Nov 27 21:10:36.627: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Nov 27 21:10:36.642: INFO: kindnet-7bsvw from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded) Nov 27 21:10:36.643: INFO: Container kindnet-cni ready: true, restart count 0 Nov 27 21:10:36.643: INFO: chaos-controller-manager-6c68f56f79-dmwmx from default started at 2020-11-23 00:43:52 +0000 UTC (1 container statuses recorded) Nov 27 21:10:36.643: INFO: Container chaos-mesh ready: true, restart count 0 Nov 27 21:10:36.644: INFO: chaos-daemon-m4wrh from default started at 2020-11-23 00:43:52 +0000 UTC (1 container statuses recorded) Nov 27 21:10:36.644: INFO: Container chaos-daemon ready: true, restart count 0 Nov 27 21:10:36.644: INFO: kube-proxy-mtljr from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded) Nov 27 21:10:36.644: INFO: Container kube-proxy ready: true, restart count 0 Nov 27 21:10:36.644: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Nov 27 21:10:36.695: INFO: kindnet-djqgh from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded) Nov 27 21:10:36.695: INFO: Container kindnet-cni ready: true, restart count 0 Nov 27 21:10:36.695: INFO: chaos-daemon-fcg7h from default started at 2020-11-23 00:43:52 +0000 UTC (1 container statuses recorded) Nov 27 21:10:36.695: INFO: Container chaos-daemon ready: true, restart count 0 Nov 27 21:10:36.695: INFO: kube-proxy-52wt5 from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded) Nov 27 21:10:36.695: INFO: Container kube-proxy ready: true, restart count 0 
[It] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-3b647c69-c13e-40d7-a530-ece07fa37d98 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-3b647c69-c13e-40d7-a530-ece07fa37d98 off the node iruya-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-3b647c69-c13e-40d7-a530-ece07fa37d98 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:10:44.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4774" for this suite. 
Nov 27 21:10:54.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:10:55.076: INFO: namespace sched-pred-4774 deletion completed in 10.16487387s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:18.573 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:10:55.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
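The NodeSelector predicate test above applies a random label (value `42`) to a node and relaunches the pod with a matching `spec.nodeSelector`. The matching rule itself is simple exact key/value containment, which can be sketched as:

```python
def node_selector_matches(node_labels, node_selector):
    """A pod's spec.nodeSelector matches a node iff every key/value
    pair appears verbatim in the node's labels. Sketch of the
    predicate the scheduler test exercises, not scheduler code."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())
```

With the label from the log, a selector of `{"kubernetes.io/e2e-3b647c69-c13e-40d7-a530-ece07fa37d98": "42"}` matches only the freshly labelled node, which is why the relaunched pod lands on iruya-worker2.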
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-5692 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Nov 27 21:10:55.177: INFO: Found 0 stateful pods, waiting for 3 Nov 27 21:11:05.187: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Nov 27 21:11:05.187: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Nov 27 21:11:05.187: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Nov 27 21:11:05.228: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Nov 27 21:11:15.326: INFO: Updating stateful set ss2 Nov 27 21:11:15.393: INFO: Waiting for Pod statefulset-5692/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Nov 27 21:11:25.555: INFO: Found 2 stateful pods, waiting for 3 Nov 27 21:11:35.564: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Nov 27 21:11:35.564: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Nov 27 21:11:35.564: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Nov 27 21:11:35.596: INFO: Updating stateful set ss2 Nov 27 21:11:35.613: INFO: 
Waiting for Pod statefulset-5692/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Nov 27 21:11:45.632: INFO: Waiting for Pod statefulset-5692/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Nov 27 21:11:55.652: INFO: Updating stateful set ss2 Nov 27 21:11:55.664: INFO: Waiting for StatefulSet statefulset-5692/ss2 to complete update Nov 27 21:11:55.665: INFO: Waiting for Pod statefulset-5692/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Nov 27 21:12:05.678: INFO: Waiting for StatefulSet statefulset-5692/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Nov 27 21:12:15.682: INFO: Deleting all statefulset in ns statefulset-5692 Nov 27 21:12:15.685: INFO: Scaling statefulset ss2 to 0 Nov 27 21:12:45.704: INFO: Waiting for statefulset status.replicas updated to 0 Nov 27 21:12:45.708: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:12:45.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5692" for this suite. 
Nov 27 21:12:53.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:12:54.278: INFO: namespace statefulset-5692 deletion completed in 8.547722902s • [SLOW TEST:119.199 seconds] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:12:54.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] 
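The canary phases in the StatefulSet test above are driven by the RollingUpdate `partition` field: only pods whose ordinal is greater than or equal to the partition are moved to the new revision, so a partition above the replica count updates nothing, and lowering it in steps produces the phased rollout seen in the log (`ss2-2` first, then `ss2-1`, then `ss2-0`). A sketch of which ordinals a given partition touches:

```python
def ordinals_to_update(replicas, partition):
    """Ordinals moved to the new revision under a RollingUpdate
    partition. Illustrative model, not controller code."""
    return [i for i in range(replicas) if i >= partition]
```

For the 3-replica `ss2` set: partition 3 is the "greater than replicas" no-op phase, partition 2 is the canary on `ss2-2`, and partition 0 completes the rollout.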
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Nov 27 21:12:54.379: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1437a281-2760-4a74-a3b1-1a4a2c639272" in namespace "downward-api-4165" to be "success or failure" Nov 27 21:12:54.397: INFO: Pod "downwardapi-volume-1437a281-2760-4a74-a3b1-1a4a2c639272": Phase="Pending", Reason="", readiness=false. Elapsed: 17.323087ms Nov 27 21:12:56.404: INFO: Pod "downwardapi-volume-1437a281-2760-4a74-a3b1-1a4a2c639272": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024543981s Nov 27 21:12:58.411: INFO: Pod "downwardapi-volume-1437a281-2760-4a74-a3b1-1a4a2c639272": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031598308s STEP: Saw pod success Nov 27 21:12:58.411: INFO: Pod "downwardapi-volume-1437a281-2760-4a74-a3b1-1a4a2c639272" satisfied condition "success or failure" Nov 27 21:12:58.417: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-1437a281-2760-4a74-a3b1-1a4a2c639272 container client-container: STEP: delete the pod Nov 27 21:12:58.435: INFO: Waiting for pod downwardapi-volume-1437a281-2760-4a74-a3b1-1a4a2c639272 to disappear Nov 27 21:12:58.521: INFO: Pod downwardapi-volume-1437a281-2760-4a74-a3b1-1a4a2c639272 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:12:58.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4165" for this suite. 
Nov 27 21:13:04.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:13:04.735: INFO: namespace downward-api-4165 deletion completed in 6.198859964s • [SLOW TEST:10.454 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:13:04.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-eed9990b-5356-4f18-a8a6-662eedcdcc0f STEP: Creating a pod to test consume configMaps Nov 27 21:13:04.841: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e554e193-c4db-4025-a50d-3923dac25b81" in namespace "projected-8168" to be 
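The repeated "Waiting up to 5m0s for pod ... Elapsed: ..." lines above come from the framework's poll loop: check a condition, sleep, re-check until it succeeds or the deadline passes. A minimal, testable sketch of that pattern (the injectable `now`/`sleep` parameters are an assumption for illustration, not part of the e2e framework):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0,
             now=time.monotonic, sleep=time.sleep):
    """Poll `condition()` until truthy or `timeout` seconds elapse.
    Returns True on success, False on timeout. Mirrors the
    'Waiting up to 5m0s ... Elapsed' loop in spirit only."""
    start = now()
    while True:
        if condition():
            return True
        if now() - start >= timeout:
            return False
        sleep(interval)
```

In the log, the pod is Pending on the first two polls (~0s and ~2s elapsed) and Succeeded on the third (~4s), at which point the loop reports the "success or failure" condition satisfied.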
"success or failure" Nov 27 21:13:04.870: INFO: Pod "pod-projected-configmaps-e554e193-c4db-4025-a50d-3923dac25b81": Phase="Pending", Reason="", readiness=false. Elapsed: 28.775781ms Nov 27 21:13:06.874: INFO: Pod "pod-projected-configmaps-e554e193-c4db-4025-a50d-3923dac25b81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032573854s Nov 27 21:13:08.880: INFO: Pod "pod-projected-configmaps-e554e193-c4db-4025-a50d-3923dac25b81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038322179s STEP: Saw pod success Nov 27 21:13:08.880: INFO: Pod "pod-projected-configmaps-e554e193-c4db-4025-a50d-3923dac25b81" satisfied condition "success or failure" Nov 27 21:13:08.885: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-e554e193-c4db-4025-a50d-3923dac25b81 container projected-configmap-volume-test: STEP: delete the pod Nov 27 21:13:08.984: INFO: Waiting for pod pod-projected-configmaps-e554e193-c4db-4025-a50d-3923dac25b81 to disappear Nov 27 21:13:09.006: INFO: Pod pod-projected-configmaps-e554e193-c4db-4025-a50d-3923dac25b81 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:13:09.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8168" for this suite. 
Nov 27 21:13:15.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:13:15.146: INFO: namespace projected-8168 deletion completed in 6.133214732s • [SLOW TEST:10.408 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:13:15.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-3543 STEP: creating a selector STEP: Creating the service pods in kubernetes Nov 27 21:13:15.199: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Nov 27 21:13:44.022: 
INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.220:8080/dial?request=hostName&protocol=http&host=10.244.1.219&port=8080&tries=1'] Namespace:pod-network-test-3543 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 21:13:44.023: INFO: >>> kubeConfig: /root/.kube/config I1127 21:13:44.104517 7 log.go:172] (0x4000dd4f20) (0x40026294a0) Create stream I1127 21:13:44.105379 7 log.go:172] (0x4000dd4f20) (0x40026294a0) Stream added, broadcasting: 1 I1127 21:13:44.133778 7 log.go:172] (0x4000dd4f20) Reply frame received for 1 I1127 21:13:44.134674 7 log.go:172] (0x4000dd4f20) (0x4002a0a000) Create stream I1127 21:13:44.134794 7 log.go:172] (0x4000dd4f20) (0x4002a0a000) Stream added, broadcasting: 3 I1127 21:13:44.137691 7 log.go:172] (0x4000dd4f20) Reply frame received for 3 I1127 21:13:44.138015 7 log.go:172] (0x4000dd4f20) (0x4002a0a0a0) Create stream I1127 21:13:44.138110 7 log.go:172] (0x4000dd4f20) (0x4002a0a0a0) Stream added, broadcasting: 5 I1127 21:13:44.139423 7 log.go:172] (0x4000dd4f20) Reply frame received for 5 I1127 21:13:44.272586 7 log.go:172] (0x4000dd4f20) Data frame received for 5 I1127 21:13:44.273031 7 log.go:172] (0x4000dd4f20) Data frame received for 3 I1127 21:13:44.273222 7 log.go:172] (0x4002a0a0a0) (5) Data frame handling I1127 21:13:44.273453 7 log.go:172] (0x4002a0a000) (3) Data frame handling I1127 21:13:44.274007 7 log.go:172] (0x4000dd4f20) Data frame received for 1 I1127 21:13:44.274085 7 log.go:172] (0x40026294a0) (1) Data frame handling I1127 21:13:44.275456 7 log.go:172] (0x4002a0a000) (3) Data frame sent I1127 21:13:44.275563 7 log.go:172] (0x4000dd4f20) Data frame received for 3 I1127 21:13:44.275618 7 log.go:172] (0x4002a0a000) (3) Data frame handling I1127 21:13:44.275831 7 log.go:172] (0x40026294a0) (1) Data frame sent I1127 21:13:44.276788 7 log.go:172] (0x4000dd4f20) (0x40026294a0) Stream removed, broadcasting: 1 I1127 
21:13:44.279391 7 log.go:172] (0x4000dd4f20) Go away received I1127 21:13:44.283047 7 log.go:172] (0x4000dd4f20) (0x40026294a0) Stream removed, broadcasting: 1 I1127 21:13:44.283376 7 log.go:172] (0x4000dd4f20) (0x4002a0a000) Stream removed, broadcasting: 3 I1127 21:13:44.283615 7 log.go:172] (0x4000dd4f20) (0x4002a0a0a0) Stream removed, broadcasting: 5 Nov 27 21:13:44.284: INFO: Waiting for endpoints: map[] Nov 27 21:13:44.290: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.220:8080/dial?request=hostName&protocol=http&host=10.244.2.160&port=8080&tries=1'] Namespace:pod-network-test-3543 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 21:13:44.290: INFO: >>> kubeConfig: /root/.kube/config I1127 21:13:44.352488 7 log.go:172] (0x4002282c60) (0x40032968c0) Create stream I1127 21:13:44.352634 7 log.go:172] (0x4002282c60) (0x40032968c0) Stream added, broadcasting: 1 I1127 21:13:44.356210 7 log.go:172] (0x4002282c60) Reply frame received for 1 I1127 21:13:44.356430 7 log.go:172] (0x4002282c60) (0x40029be460) Create stream I1127 21:13:44.356563 7 log.go:172] (0x4002282c60) (0x40029be460) Stream added, broadcasting: 3 I1127 21:13:44.358348 7 log.go:172] (0x4002282c60) Reply frame received for 3 I1127 21:13:44.358538 7 log.go:172] (0x4002282c60) (0x4003296960) Create stream I1127 21:13:44.358657 7 log.go:172] (0x4002282c60) (0x4003296960) Stream added, broadcasting: 5 I1127 21:13:44.360262 7 log.go:172] (0x4002282c60) Reply frame received for 5 I1127 21:13:44.423952 7 log.go:172] (0x4002282c60) Data frame received for 3 I1127 21:13:44.424139 7 log.go:172] (0x40029be460) (3) Data frame handling I1127 21:13:44.424297 7 log.go:172] (0x40029be460) (3) Data frame sent I1127 21:13:44.424987 7 log.go:172] (0x4002282c60) Data frame received for 3 I1127 21:13:44.425095 7 log.go:172] (0x40029be460) (3) Data frame handling I1127 21:13:44.425269 7 log.go:172] (0x4002282c60) 
Data frame received for 5 I1127 21:13:44.425399 7 log.go:172] (0x4003296960) (5) Data frame handling I1127 21:13:44.427252 7 log.go:172] (0x4002282c60) Data frame received for 1 I1127 21:13:44.427362 7 log.go:172] (0x40032968c0) (1) Data frame handling I1127 21:13:44.427473 7 log.go:172] (0x40032968c0) (1) Data frame sent I1127 21:13:44.427615 7 log.go:172] (0x4002282c60) (0x40032968c0) Stream removed, broadcasting: 1 I1127 21:13:44.427767 7 log.go:172] (0x4002282c60) Go away received I1127 21:13:44.428192 7 log.go:172] (0x4002282c60) (0x40032968c0) Stream removed, broadcasting: 1 I1127 21:13:44.428377 7 log.go:172] (0x4002282c60) (0x40029be460) Stream removed, broadcasting: 3 I1127 21:13:44.428493 7 log.go:172] (0x4002282c60) (0x4003296960) Stream removed, broadcasting: 5 Nov 27 21:13:44.428: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:13:44.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3543" for this suite. 
Nov 27 21:14:08.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:14:08.853: INFO: namespace pod-network-test-3543 deletion completed in 24.414293392s • [SLOW TEST:53.706 seconds] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:14:08.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Nov 27 21:14:08.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8915' Nov 27 21:14:13.213: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Nov 27 21:14:13.213: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Nov 27 21:14:13.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8915' Nov 27 21:14:14.636: INFO: stderr: "" Nov 27 21:14:14.636: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:14:14.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8915" for this suite. 
Nov 27 21:14:20.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:14:20.849: INFO: namespace kubectl-8915 deletion completed in 6.20032126s • [SLOW TEST:11.992 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:14:20.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-0f7c6fe0-810a-4653-b3b0-d900019d5848 STEP: Creating a pod to test consume 
secrets Nov 27 21:14:21.016: INFO: Waiting up to 5m0s for pod "pod-secrets-fccd7c78-4768-4101-b8f9-83276dc521d5" in namespace "secrets-2350" to be "success or failure" Nov 27 21:14:21.033: INFO: Pod "pod-secrets-fccd7c78-4768-4101-b8f9-83276dc521d5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.698072ms Nov 27 21:14:23.041: INFO: Pod "pod-secrets-fccd7c78-4768-4101-b8f9-83276dc521d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024131552s Nov 27 21:14:25.047: INFO: Pod "pod-secrets-fccd7c78-4768-4101-b8f9-83276dc521d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030804066s STEP: Saw pod success Nov 27 21:14:25.048: INFO: Pod "pod-secrets-fccd7c78-4768-4101-b8f9-83276dc521d5" satisfied condition "success or failure" Nov 27 21:14:25.052: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-fccd7c78-4768-4101-b8f9-83276dc521d5 container secret-volume-test: STEP: delete the pod Nov 27 21:14:25.228: INFO: Waiting for pod pod-secrets-fccd7c78-4768-4101-b8f9-83276dc521d5 to disappear Nov 27 21:14:25.236: INFO: Pod pod-secrets-fccd7c78-4768-4101-b8f9-83276dc521d5 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:14:25.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2350" for this suite. 
Nov 27 21:14:31.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:14:31.456: INFO: namespace secrets-2350 deletion completed in 6.212876549s • [SLOW TEST:10.602 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:14:31.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] 
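The `defaultMode` exercised by the Secrets volume test above is a Unix file mode, but the API field is a plain integer, so manifests written in JSON must carry the decimal form (e.g. octal 0644 becomes 420). The conversion is just base-8 parsing:

```python
def default_mode_to_decimal(octal_str):
    """Convert an octal file-mode string (as you'd write in YAML,
    e.g. "0644") to the decimal integer the JSON API field expects.
    Sketch for illustration."""
    return int(octal_str, 8)
```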
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Nov 27 21:14:31.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4461' Nov 27 21:14:33.433: INFO: stderr: "" Nov 27 21:14:33.433: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Nov 27 21:14:33.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4461' Nov 27 21:14:34.720: INFO: stderr: "" Nov 27 21:14:34.720: INFO: stdout: "update-demo-nautilus-g4hzd update-demo-nautilus-tpvvh " Nov 27 21:14:34.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g4hzd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4461' Nov 27 21:14:35.969: INFO: stderr: "" Nov 27 21:14:35.969: INFO: stdout: "" Nov 27 21:14:35.969: INFO: update-demo-nautilus-g4hzd is created but not running Nov 27 21:14:40.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4461' Nov 27 21:14:42.262: INFO: stderr: "" Nov 27 21:14:42.262: INFO: stdout: "update-demo-nautilus-g4hzd update-demo-nautilus-tpvvh " Nov 27 21:14:42.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g4hzd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4461' Nov 27 21:14:43.564: INFO: stderr: "" Nov 27 21:14:43.564: INFO: stdout: "true" Nov 27 21:14:43.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g4hzd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4461' Nov 27 21:14:44.851: INFO: stderr: "" Nov 27 21:14:44.851: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Nov 27 21:14:44.852: INFO: validating pod update-demo-nautilus-g4hzd Nov 27 21:14:44.858: INFO: got data: { "image": "nautilus.jpg" } Nov 27 21:14:44.859: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 27 21:14:44.859: INFO: update-demo-nautilus-g4hzd is verified up and running Nov 27 21:14:44.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpvvh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4461' Nov 27 21:14:46.138: INFO: stderr: "" Nov 27 21:14:46.138: INFO: stdout: "true" Nov 27 21:14:46.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpvvh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4461' Nov 27 21:14:47.454: INFO: stderr: "" Nov 27 21:14:47.454: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Nov 27 21:14:47.454: INFO: validating pod update-demo-nautilus-tpvvh Nov 27 21:14:47.486: INFO: got data: { "image": "nautilus.jpg" } Nov 27 21:14:47.486: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Nov 27 21:14:47.486: INFO: update-demo-nautilus-tpvvh is verified up and running STEP: rolling-update to new replication controller Nov 27 21:14:47.501: INFO: scanned /root for discovery docs: Nov 27 21:14:47.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-4461' Nov 27 21:15:11.903: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Nov 27 21:15:11.903: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Nov 27 21:15:11.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4461' Nov 27 21:15:13.191: INFO: stderr: "" Nov 27 21:15:13.191: INFO: stdout: "update-demo-kitten-8jms5 update-demo-kitten-9gtng " Nov 27 21:15:13.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8jms5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4461' Nov 27 21:15:14.462: INFO: stderr: "" Nov 27 21:15:14.462: INFO: stdout: "true" Nov 27 21:15:14.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8jms5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4461' Nov 27 21:15:15.741: INFO: stderr: "" Nov 27 21:15:15.742: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Nov 27 21:15:15.742: INFO: validating pod update-demo-kitten-8jms5 Nov 27 21:15:15.755: INFO: got data: { "image": "kitten.jpg" } Nov 27 21:15:15.755: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Nov 27 21:15:15.755: INFO: update-demo-kitten-8jms5 is verified up and running Nov 27 21:15:15.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9gtng -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4461' Nov 27 21:15:17.032: INFO: stderr: "" Nov 27 21:15:17.032: INFO: stdout: "true" Nov 27 21:15:17.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9gtng -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4461' Nov 27 21:15:18.292: INFO: stderr: "" Nov 27 21:15:18.292: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Nov 27 21:15:18.292: INFO: validating pod update-demo-kitten-9gtng Nov 27 21:15:18.305: INFO: got data: { "image": "kitten.jpg" } Nov 27 21:15:18.305: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
Nov 27 21:15:18.305: INFO: update-demo-kitten-9gtng is verified up and running [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:15:18.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4461" for this suite. Nov 27 21:15:42.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:15:42.553: INFO: namespace kubectl-4461 deletion completed in 24.238037746s • [SLOW TEST:71.095 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:15:42.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume 
[NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-e25929de-b692-465f-9d92-edab5428cf31 STEP: Creating secret with name s-test-opt-upd-ab4051f8-cba5-43c8-890e-bc3b1856d67b STEP: Creating the pod STEP: Deleting secret s-test-opt-del-e25929de-b692-465f-9d92-edab5428cf31 STEP: Updating secret s-test-opt-upd-ab4051f8-cba5-43c8-890e-bc3b1856d67b STEP: Creating secret with name s-test-opt-create-50b88360-5c70-4b40-8bfb-5aff9a5d9795 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:17:11.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8741" for this suite. Nov 27 21:17:35.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:17:35.588: INFO: namespace secrets-8741 deletion completed in 24.182278547s • [SLOW TEST:113.034 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:17:35.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Nov 27 21:17:39.750: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:17:39.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1119" for this suite. 
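The two termination-message conformance cases in this run (message empty when the pod succeeds with an empty message file, message `OK` when the file has content) both exercise `TerminationMessagePolicy: FallbackToLogsOnError`. A simplified sketch of that policy's decision rule (not the kubelet's actual code; the log tail is only used when the message file is empty and the container failed):

```go
package main

import "fmt"

// terminationMessage picks a container's termination message under the
// FallbackToLogsOnError policy: prefer the message file's contents; fall back
// to the log tail only on a non-zero exit with an empty file.
func terminationMessage(fileContents, logTail string, exitCode int, fallbackToLogs bool) string {
	if fileContents != "" {
		return fileContents
	}
	if fallbackToLogs && exitCode != 0 {
		return logTail
	}
	return "" // succeeded with an empty message file: message stays empty
}

func main() {
	// The two cases seen in this run.
	fmt.Printf("%q\n", terminationMessage("", "some log output", 0, true)) // pod succeeds, empty file
	fmt.Printf("%q\n", terminationMessage("OK", "some log output", 0, true)) // message read from file
}
```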
Nov 27 21:17:45.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:17:45.988: INFO: namespace container-runtime-1119 deletion completed in 6.169381018s • [SLOW TEST:10.394 seconds] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:17:45.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Nov 27 21:17:54.175: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 27 21:17:54.187: INFO: Pod pod-with-poststart-http-hook still exists Nov 27 21:17:56.188: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 27 21:17:56.197: INFO: Pod pod-with-poststart-http-hook still exists Nov 27 21:17:58.188: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 27 21:17:58.195: INFO: Pod pod-with-poststart-http-hook still exists Nov 27 21:18:00.188: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 27 21:18:00.195: INFO: Pod pod-with-poststart-http-hook still exists Nov 27 21:18:02.188: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 27 21:18:02.196: INFO: Pod pod-with-poststart-http-hook still exists Nov 27 21:18:04.188: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 27 21:18:04.195: INFO: Pod pod-with-poststart-http-hook still exists Nov 27 21:18:06.188: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 27 21:18:06.195: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:18:06.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"container-lifecycle-hook-9252" for this suite. Nov 27 21:18:28.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:18:28.409: INFO: namespace container-lifecycle-hook-9252 deletion completed in 22.203925329s • [SLOW TEST:42.417 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:18:28.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-4151 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-4151 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4151 Nov 27 21:18:28.537: INFO: Found 0 stateful pods, waiting for 1 Nov 27 21:18:38.544: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Nov 27 21:18:38.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4151 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Nov 27 21:18:40.078: INFO: stderr: "I1127 21:18:39.929007 1708 log.go:172] (0x400083c160) (0x40008fe1e0) Create stream\nI1127 21:18:39.932670 1708 log.go:172] (0x400083c160) (0x40008fe1e0) Stream added, broadcasting: 1\nI1127 21:18:39.948007 1708 log.go:172] (0x400083c160) Reply frame received for 1\nI1127 21:18:39.948712 1708 log.go:172] (0x400083c160) (0x40008fe280) Create stream\nI1127 21:18:39.948781 1708 log.go:172] (0x400083c160) (0x40008fe280) Stream added, broadcasting: 3\nI1127 21:18:39.950442 1708 log.go:172] (0x400083c160) Reply frame received for 3\nI1127 21:18:39.950758 1708 log.go:172] (0x400083c160) (0x400067e1e0) Create stream\nI1127 21:18:39.950834 1708 log.go:172] (0x400083c160) (0x400067e1e0) Stream added, broadcasting: 5\nI1127 21:18:39.952114 1708 log.go:172] (0x400083c160) Reply frame received for 5\nI1127 21:18:40.016986 1708 log.go:172] (0x400083c160) Data frame received for 5\nI1127 21:18:40.017201 
1708 log.go:172] (0x400067e1e0) (5) Data frame handling\nI1127 21:18:40.017591 1708 log.go:172] (0x400067e1e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1127 21:18:40.054168 1708 log.go:172] (0x400083c160) Data frame received for 3\nI1127 21:18:40.054368 1708 log.go:172] (0x40008fe280) (3) Data frame handling\nI1127 21:18:40.054565 1708 log.go:172] (0x40008fe280) (3) Data frame sent\nI1127 21:18:40.054747 1708 log.go:172] (0x400083c160) Data frame received for 3\nI1127 21:18:40.054855 1708 log.go:172] (0x40008fe280) (3) Data frame handling\nI1127 21:18:40.054994 1708 log.go:172] (0x400083c160) Data frame received for 5\nI1127 21:18:40.055217 1708 log.go:172] (0x400067e1e0) (5) Data frame handling\nI1127 21:18:40.056335 1708 log.go:172] (0x400083c160) Data frame received for 1\nI1127 21:18:40.056468 1708 log.go:172] (0x40008fe1e0) (1) Data frame handling\nI1127 21:18:40.056591 1708 log.go:172] (0x40008fe1e0) (1) Data frame sent\nI1127 21:18:40.058153 1708 log.go:172] (0x400083c160) (0x40008fe1e0) Stream removed, broadcasting: 1\nI1127 21:18:40.061870 1708 log.go:172] (0x400083c160) Go away received\nI1127 21:18:40.065124 1708 log.go:172] (0x400083c160) (0x40008fe1e0) Stream removed, broadcasting: 1\nI1127 21:18:40.065482 1708 log.go:172] (0x400083c160) (0x40008fe280) Stream removed, broadcasting: 3\nI1127 21:18:40.065901 1708 log.go:172] (0x400083c160) (0x400067e1e0) Stream removed, broadcasting: 5\n" Nov 27 21:18:40.079: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Nov 27 21:18:40.080: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Nov 27 21:18:40.088: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Nov 27 21:18:50.097: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 27 21:18:50.097: INFO: Waiting for statefulset status.replicas updated 
to 0 Nov 27 21:18:50.120: INFO:
POD   NODE          PHASE    GRACE  CONDITIONS
Nov 27 21:18:50.121: INFO: ss-0  iruya-worker  Running  [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:28 +0000 UTC }]
Nov 27 21:18:50.122: INFO: ss-1                Pending  []
Nov 27 21:18:50.122: INFO:
Nov 27 21:18:50.122: INFO: StatefulSet ss has not reached scale 3, at 2 Nov 27 21:18:51.130: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988365133s Nov 27 21:18:52.437: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98026833s Nov 27 21:18:53.456: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.672932075s Nov 27 21:18:54.473: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.653608601s Nov 27 21:18:55.482: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.636958448s Nov 27 21:18:56.490: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.628052561s Nov 27 21:18:57.498: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.62043355s Nov 27 21:18:58.507: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.612303157s Nov 27 21:18:59.515: INFO: Verifying statefulset ss doesn't scale past 3 for another 603.165557ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4151 Nov 27 21:19:00.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4151 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Nov 27 21:19:02.063: INFO: stderr: "I1127 21:19:01.934382 1732 log.go:172] (0x4000141080)
(0x40005926e0) Create stream\nI1127 21:19:01.940068 1732 log.go:172] (0x4000141080) (0x40005926e0) Stream added, broadcasting: 1\nI1127 21:19:01.951463 1732 log.go:172] (0x4000141080) Reply frame received for 1\nI1127 21:19:01.952422 1732 log.go:172] (0x4000141080) (0x4000592780) Create stream\nI1127 21:19:01.952554 1732 log.go:172] (0x4000141080) (0x4000592780) Stream added, broadcasting: 3\nI1127 21:19:01.954318 1732 log.go:172] (0x4000141080) Reply frame received for 3\nI1127 21:19:01.954620 1732 log.go:172] (0x4000141080) (0x40009d4000) Create stream\nI1127 21:19:01.954696 1732 log.go:172] (0x4000141080) (0x40009d4000) Stream added, broadcasting: 5\nI1127 21:19:01.956062 1732 log.go:172] (0x4000141080) Reply frame received for 5\nI1127 21:19:02.038686 1732 log.go:172] (0x4000141080) Data frame received for 5\nI1127 21:19:02.039097 1732 log.go:172] (0x4000141080) Data frame received for 3\nI1127 21:19:02.039449 1732 log.go:172] (0x4000141080) Data frame received for 1\nI1127 21:19:02.039599 1732 log.go:172] (0x40005926e0) (1) Data frame handling\nI1127 21:19:02.040104 1732 log.go:172] (0x4000592780) (3) Data frame handling\nI1127 21:19:02.040333 1732 log.go:172] (0x40009d4000) (5) Data frame handling\nI1127 21:19:02.042346 1732 log.go:172] (0x40005926e0) (1) Data frame sent\nI1127 21:19:02.042462 1732 log.go:172] (0x40009d4000) (5) Data frame sent\nI1127 21:19:02.042868 1732 log.go:172] (0x4000592780) (3) Data frame sent\nI1127 21:19:02.043219 1732 log.go:172] (0x4000141080) Data frame received for 3\nI1127 21:19:02.043377 1732 log.go:172] (0x4000592780) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI1127 21:19:02.043629 1732 log.go:172] (0x4000141080) Data frame received for 5\nI1127 21:19:02.043735 1732 log.go:172] (0x40009d4000) (5) Data frame handling\nI1127 21:19:02.047017 1732 log.go:172] (0x4000141080) (0x40005926e0) Stream removed, broadcasting: 1\nI1127 21:19:02.048983 1732 log.go:172] (0x4000141080) Go away received\nI1127 
21:19:02.051651 1732 log.go:172] (0x4000141080) (0x40005926e0) Stream removed, broadcasting: 1\nI1127 21:19:02.051963 1732 log.go:172] (0x4000141080) (0x4000592780) Stream removed, broadcasting: 3\nI1127 21:19:02.052186 1732 log.go:172] (0x4000141080) (0x40009d4000) Stream removed, broadcasting: 5\n" Nov 27 21:19:02.064: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Nov 27 21:19:02.064: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Nov 27 21:19:02.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4151 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Nov 27 21:19:03.595: INFO: stderr: "I1127 21:19:03.430394 1755 log.go:172] (0x40006b60b0) (0x40008f61e0) Create stream\nI1127 21:19:03.432627 1755 log.go:172] (0x40006b60b0) (0x40008f61e0) Stream added, broadcasting: 1\nI1127 21:19:03.446018 1755 log.go:172] (0x40006b60b0) Reply frame received for 1\nI1127 21:19:03.446907 1755 log.go:172] (0x40006b60b0) (0x4000648140) Create stream\nI1127 21:19:03.447000 1755 log.go:172] (0x40006b60b0) (0x4000648140) Stream added, broadcasting: 3\nI1127 21:19:03.449488 1755 log.go:172] (0x40006b60b0) Reply frame received for 3\nI1127 21:19:03.450106 1755 log.go:172] (0x40006b60b0) (0x40004e8000) Create stream\nI1127 21:19:03.450226 1755 log.go:172] (0x40006b60b0) (0x40004e8000) Stream added, broadcasting: 5\nI1127 21:19:03.451809 1755 log.go:172] (0x40006b60b0) Reply frame received for 5\nI1127 21:19:03.555059 1755 log.go:172] (0x40006b60b0) Data frame received for 5\nI1127 21:19:03.555356 1755 log.go:172] (0x40004e8000) (5) Data frame handling\nI1127 21:19:03.555916 1755 log.go:172] (0x40004e8000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI1127 21:19:03.575334 1755 log.go:172] (0x40006b60b0) Data frame received for 3\nI1127 21:19:03.575703 1755 log.go:172] 
(0x4000648140) (3) Data frame handling\nI1127 21:19:03.575868 1755 log.go:172] (0x4000648140) (3) Data frame sent\nI1127 21:19:03.576001 1755 log.go:172] (0x40006b60b0) Data frame received for 5\nI1127 21:19:03.576109 1755 log.go:172] (0x40004e8000) (5) Data frame handling\nI1127 21:19:03.576225 1755 log.go:172] (0x40004e8000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI1127 21:19:03.576372 1755 log.go:172] (0x40006b60b0) Data frame received for 5\nI1127 21:19:03.576532 1755 log.go:172] (0x40004e8000) (5) Data frame handling\nI1127 21:19:03.576657 1755 log.go:172] (0x40004e8000) (5) Data frame sent\nI1127 21:19:03.576748 1755 log.go:172] (0x40006b60b0) Data frame received for 5\nI1127 21:19:03.576825 1755 log.go:172] (0x40004e8000) (5) Data frame handling\n+ true\nI1127 21:19:03.577145 1755 log.go:172] (0x40006b60b0) Data frame received for 3\nI1127 21:19:03.577379 1755 log.go:172] (0x4000648140) (3) Data frame handling\nI1127 21:19:03.578233 1755 log.go:172] (0x40006b60b0) Data frame received for 1\nI1127 21:19:03.578398 1755 log.go:172] (0x40008f61e0) (1) Data frame handling\nI1127 21:19:03.578540 1755 log.go:172] (0x40008f61e0) (1) Data frame sent\nI1127 21:19:03.580233 1755 log.go:172] (0x40006b60b0) (0x40008f61e0) Stream removed, broadcasting: 1\nI1127 21:19:03.583444 1755 log.go:172] (0x40006b60b0) Go away received\nI1127 21:19:03.585522 1755 log.go:172] (0x40006b60b0) (0x40008f61e0) Stream removed, broadcasting: 1\nI1127 21:19:03.585847 1755 log.go:172] (0x40006b60b0) (0x4000648140) Stream removed, broadcasting: 3\nI1127 21:19:03.586113 1755 log.go:172] (0x40006b60b0) (0x40004e8000) Stream removed, broadcasting: 5\n" Nov 27 21:19:03.597: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Nov 27 21:19:03.597: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Nov 27 21:19:03.597: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4151 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Nov 27 21:19:05.079: INFO: stderr: "I1127 21:19:04.970102 1778 log.go:172] (0x4000424000) (0x40008da1e0) Create stream\nI1127 21:19:04.973713 1778 log.go:172] (0x4000424000) (0x40008da1e0) Stream added, broadcasting: 1\nI1127 21:19:04.994357 1778 log.go:172] (0x4000424000) Reply frame received for 1\nI1127 21:19:04.994943 1778 log.go:172] (0x4000424000) (0x40008da280) Create stream\nI1127 21:19:04.995014 1778 log.go:172] (0x4000424000) (0x40008da280) Stream added, broadcasting: 3\nI1127 21:19:04.996703 1778 log.go:172] (0x4000424000) Reply frame received for 3\nI1127 21:19:04.997098 1778 log.go:172] (0x4000424000) (0x40006701e0) Create stream\nI1127 21:19:04.997163 1778 log.go:172] (0x4000424000) (0x40006701e0) Stream added, broadcasting: 5\nI1127 21:19:04.998203 1778 log.go:172] (0x4000424000) Reply frame received for 5\nI1127 21:19:05.056018 1778 log.go:172] (0x4000424000) Data frame received for 5\nI1127 21:19:05.056342 1778 log.go:172] (0x4000424000) Data frame received for 1\nI1127 21:19:05.056541 1778 log.go:172] (0x40006701e0) (5) Data frame handling\nI1127 21:19:05.056786 1778 log.go:172] (0x40008da1e0) (1) Data frame handling\nI1127 21:19:05.057203 1778 log.go:172] (0x4000424000) Data frame received for 3\nI1127 21:19:05.057444 1778 log.go:172] (0x40008da280) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI1127 21:19:05.059052 1778 log.go:172] (0x40006701e0) (5) Data frame sent\nI1127 21:19:05.059366 1778 log.go:172] (0x40008da1e0) (1) Data frame sent\nI1127 21:19:05.059503 1778 log.go:172] (0x40008da280) (3) Data frame sent\nI1127 21:19:05.059863 1778 log.go:172] (0x4000424000) Data frame received for 5\nI1127 21:19:05.059971 1778 log.go:172] (0x40006701e0) (5) Data frame handling\nI1127 
21:19:05.060151 1778 log.go:172] (0x4000424000) Data frame received for 3\nI1127 21:19:05.060256 1778 log.go:172] (0x40008da280) (3) Data frame handling\nI1127 21:19:05.063293 1778 log.go:172] (0x4000424000) (0x40008da1e0) Stream removed, broadcasting: 1\nI1127 21:19:05.064659 1778 log.go:172] (0x4000424000) Go away received\nI1127 21:19:05.068793 1778 log.go:172] (0x4000424000) (0x40008da1e0) Stream removed, broadcasting: 1\nI1127 21:19:05.069266 1778 log.go:172] (0x4000424000) (0x40008da280) Stream removed, broadcasting: 3\nI1127 21:19:05.069429 1778 log.go:172] (0x4000424000) (0x40006701e0) Stream removed, broadcasting: 5\n" Nov 27 21:19:05.080: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Nov 27 21:19:05.080: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Nov 27 21:19:05.087: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Nov 27 21:19:05.087: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Nov 27 21:19:05.087: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Nov 27 21:19:05.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4151 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Nov 27 21:19:06.623: INFO: stderr: "I1127 21:19:06.471550 1801 log.go:172] (0x40006e6420) (0x400053c6e0) Create stream\nI1127 21:19:06.475318 1801 log.go:172] (0x40006e6420) (0x400053c6e0) Stream added, broadcasting: 1\nI1127 21:19:06.488743 1801 log.go:172] (0x40006e6420) Reply frame received for 1\nI1127 21:19:06.490057 1801 log.go:172] (0x40006e6420) (0x400064a1e0) Create stream\nI1127 21:19:06.490200 1801 log.go:172] (0x40006e6420) (0x400064a1e0) Stream added, broadcasting: 3\nI1127 21:19:06.492309 1801 
log.go:172] (0x40006e6420) Reply frame received for 3\nI1127 21:19:06.492722 1801 log.go:172] (0x40006e6420) (0x4000520140) Create stream\nI1127 21:19:06.492807 1801 log.go:172] (0x40006e6420) (0x4000520140) Stream added, broadcasting: 5\nI1127 21:19:06.494495 1801 log.go:172] (0x40006e6420) Reply frame received for 5\nI1127 21:19:06.595505 1801 log.go:172] (0x40006e6420) Data frame received for 5\nI1127 21:19:06.595872 1801 log.go:172] (0x40006e6420) Data frame received for 1\nI1127 21:19:06.596015 1801 log.go:172] (0x4000520140) (5) Data frame handling\nI1127 21:19:06.596252 1801 log.go:172] (0x40006e6420) Data frame received for 3\nI1127 21:19:06.596389 1801 log.go:172] (0x400064a1e0) (3) Data frame handling\nI1127 21:19:06.596498 1801 log.go:172] (0x400053c6e0) (1) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1127 21:19:06.598373 1801 log.go:172] (0x400064a1e0) (3) Data frame sent\nI1127 21:19:06.598602 1801 log.go:172] (0x400053c6e0) (1) Data frame sent\nI1127 21:19:06.598796 1801 log.go:172] (0x4000520140) (5) Data frame sent\nI1127 21:19:06.598894 1801 log.go:172] (0x40006e6420) Data frame received for 5\nI1127 21:19:06.598952 1801 log.go:172] (0x4000520140) (5) Data frame handling\nI1127 21:19:06.599097 1801 log.go:172] (0x40006e6420) Data frame received for 3\nI1127 21:19:06.599197 1801 log.go:172] (0x400064a1e0) (3) Data frame handling\nI1127 21:19:06.600454 1801 log.go:172] (0x40006e6420) (0x400053c6e0) Stream removed, broadcasting: 1\nI1127 21:19:06.603821 1801 log.go:172] (0x40006e6420) Go away received\nI1127 21:19:06.615311 1801 log.go:172] (0x40006e6420) (0x400053c6e0) Stream removed, broadcasting: 1\nI1127 21:19:06.615597 1801 log.go:172] (0x40006e6420) (0x400064a1e0) Stream removed, broadcasting: 3\nI1127 21:19:06.615838 1801 log.go:172] (0x40006e6420) (0x4000520140) Stream removed, broadcasting: 5\n" Nov 27 21:19:06.624: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Nov 27 21:19:06.624: INFO: 
stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Nov 27 21:19:06.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4151 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Nov 27 21:19:08.086: INFO: stderr: "I1127 21:19:07.931712 1825 log.go:172] (0x40005fc420) (0x4000956820) Create stream\nI1127 21:19:07.936759 1825 log.go:172] (0x40005fc420) (0x4000956820) Stream added, broadcasting: 1\nI1127 21:19:07.950371 1825 log.go:172] (0x40005fc420) Reply frame received for 1\nI1127 21:19:07.950880 1825 log.go:172] (0x40005fc420) (0x40009100a0) Create stream\nI1127 21:19:07.950932 1825 log.go:172] (0x40005fc420) (0x40009100a0) Stream added, broadcasting: 3\nI1127 21:19:07.952315 1825 log.go:172] (0x40005fc420) Reply frame received for 3\nI1127 21:19:07.952560 1825 log.go:172] (0x40005fc420) (0x4000956000) Create stream\nI1127 21:19:07.952615 1825 log.go:172] (0x40005fc420) (0x4000956000) Stream added, broadcasting: 5\nI1127 21:19:07.953569 1825 log.go:172] (0x40005fc420) Reply frame received for 5\nI1127 21:19:08.039287 1825 log.go:172] (0x40005fc420) Data frame received for 5\nI1127 21:19:08.039456 1825 log.go:172] (0x4000956000) (5) Data frame handling\nI1127 21:19:08.039792 1825 log.go:172] (0x4000956000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1127 21:19:08.066466 1825 log.go:172] (0x40005fc420) Data frame received for 5\nI1127 21:19:08.066625 1825 log.go:172] (0x4000956000) (5) Data frame handling\nI1127 21:19:08.066739 1825 log.go:172] (0x40005fc420) Data frame received for 3\nI1127 21:19:08.066880 1825 log.go:172] (0x40009100a0) (3) Data frame handling\nI1127 21:19:08.067086 1825 log.go:172] (0x40009100a0) (3) Data frame sent\nI1127 21:19:08.067260 1825 log.go:172] (0x40005fc420) Data frame received for 3\nI1127 21:19:08.067334 1825 log.go:172] (0x40005fc420) Data frame received for 
1\nI1127 21:19:08.067395 1825 log.go:172] (0x4000956820) (1) Data frame handling\nI1127 21:19:08.067468 1825 log.go:172] (0x4000956820) (1) Data frame sent\nI1127 21:19:08.067516 1825 log.go:172] (0x40009100a0) (3) Data frame handling\nI1127 21:19:08.069933 1825 log.go:172] (0x40005fc420) (0x4000956820) Stream removed, broadcasting: 1\nI1127 21:19:08.072149 1825 log.go:172] (0x40005fc420) Go away received\nI1127 21:19:08.077225 1825 log.go:172] (0x40005fc420) (0x4000956820) Stream removed, broadcasting: 1\nI1127 21:19:08.077469 1825 log.go:172] (0x40005fc420) (0x40009100a0) Stream removed, broadcasting: 3\nI1127 21:19:08.077650 1825 log.go:172] (0x40005fc420) (0x4000956000) Stream removed, broadcasting: 5\n" Nov 27 21:19:08.087: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Nov 27 21:19:08.087: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Nov 27 21:19:08.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4151 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Nov 27 21:19:09.581: INFO: stderr: "I1127 21:19:09.446855 1848 log.go:172] (0x4000aca0b0) (0x400082a6e0) Create stream\nI1127 21:19:09.451899 1848 log.go:172] (0x4000aca0b0) (0x400082a6e0) Stream added, broadcasting: 1\nI1127 21:19:09.463643 1848 log.go:172] (0x4000aca0b0) Reply frame received for 1\nI1127 21:19:09.464281 1848 log.go:172] (0x4000aca0b0) (0x400082a780) Create stream\nI1127 21:19:09.464355 1848 log.go:172] (0x4000aca0b0) (0x400082a780) Stream added, broadcasting: 3\nI1127 21:19:09.465830 1848 log.go:172] (0x4000aca0b0) Reply frame received for 3\nI1127 21:19:09.466145 1848 log.go:172] (0x4000aca0b0) (0x400067a280) Create stream\nI1127 21:19:09.466213 1848 log.go:172] (0x4000aca0b0) (0x400067a280) Stream added, broadcasting: 5\nI1127 21:19:09.467640 1848 log.go:172] (0x4000aca0b0) Reply frame received for 
5\nI1127 21:19:09.520931 1848 log.go:172] (0x4000aca0b0) Data frame received for 5\nI1127 21:19:09.521173 1848 log.go:172] (0x400067a280) (5) Data frame handling\nI1127 21:19:09.521704 1848 log.go:172] (0x400067a280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1127 21:19:09.558017 1848 log.go:172] (0x4000aca0b0) Data frame received for 3\nI1127 21:19:09.558140 1848 log.go:172] (0x400082a780) (3) Data frame handling\nI1127 21:19:09.558271 1848 log.go:172] (0x4000aca0b0) Data frame received for 5\nI1127 21:19:09.558425 1848 log.go:172] (0x400067a280) (5) Data frame handling\nI1127 21:19:09.558547 1848 log.go:172] (0x400082a780) (3) Data frame sent\nI1127 21:19:09.558708 1848 log.go:172] (0x4000aca0b0) Data frame received for 3\nI1127 21:19:09.558834 1848 log.go:172] (0x400082a780) (3) Data frame handling\nI1127 21:19:09.559644 1848 log.go:172] (0x4000aca0b0) Data frame received for 1\nI1127 21:19:09.559789 1848 log.go:172] (0x400082a6e0) (1) Data frame handling\nI1127 21:19:09.559937 1848 log.go:172] (0x400082a6e0) (1) Data frame sent\nI1127 21:19:09.562462 1848 log.go:172] (0x4000aca0b0) (0x400082a6e0) Stream removed, broadcasting: 1\nI1127 21:19:09.565534 1848 log.go:172] (0x4000aca0b0) Go away received\nI1127 21:19:09.570717 1848 log.go:172] (0x4000aca0b0) (0x400082a6e0) Stream removed, broadcasting: 1\nI1127 21:19:09.570973 1848 log.go:172] (0x4000aca0b0) (0x400082a780) Stream removed, broadcasting: 3\nI1127 21:19:09.571164 1848 log.go:172] (0x4000aca0b0) (0x400067a280) Stream removed, broadcasting: 5\n" Nov 27 21:19:09.582: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Nov 27 21:19:09.582: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Nov 27 21:19:09.582: INFO: Waiting for statefulset status.replicas updated to 0 Nov 27 21:19:09.587: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Nov 27 
21:19:19.601: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 27 21:19:19.601: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Nov 27 21:19:19.601: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Nov 27 21:19:19.617: INFO: POD NODE PHASE GRACE CONDITIONS Nov 27 21:19:19.617: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:28 +0000 UTC }] Nov 27 21:19:19.618: INFO: ss-1 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:50 +0000 UTC }] Nov 27 21:19:19.618: INFO: ss-2 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:50 +0000 UTC }] Nov 27 21:19:19.618: INFO: Nov 27 21:19:19.618: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 27 21:19:20.627: INFO: POD NODE 
PHASE GRACE CONDITIONS Nov 27 21:19:20.628: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:28 +0000 UTC }] Nov 27 21:19:20.628: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:50 +0000 UTC }] Nov 27 21:19:20.629: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:50 +0000 UTC }] Nov 27 21:19:20.629: INFO: Nov 27 21:19:20.629: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 27 21:19:21.638: INFO: POD NODE PHASE GRACE CONDITIONS Nov 27 21:19:21.638: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-11-27 21:19:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:28 +0000 UTC }] Nov 27 21:19:21.638: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:50 +0000 UTC }] Nov 27 21:19:21.638: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:50 +0000 UTC }] Nov 27 21:19:21.639: INFO: Nov 27 21:19:21.639: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 27 21:19:22.647: INFO: POD NODE PHASE GRACE CONDITIONS Nov 27 21:19:22.647: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:28 +0000 UTC }] Nov 27 21:19:22.648: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 
UTC 2020-11-27 21:19:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:50 +0000 UTC }] Nov 27 21:19:22.648: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:50 +0000 UTC }] Nov 27 21:19:22.648: INFO: Nov 27 21:19:22.648: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 27 21:19:23.674: INFO: POD NODE PHASE GRACE CONDITIONS Nov 27 21:19:23.674: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:50 +0000 UTC }] Nov 27 21:19:23.674: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:50 +0000 UTC }] Nov 27 21:19:23.675: INFO: Nov 27 
21:19:23.675: INFO: StatefulSet ss has not reached scale 0, at 2 Nov 27 21:19:24.683: INFO: POD NODE PHASE GRACE CONDITIONS Nov 27 21:19:24.683: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:50 +0000 UTC }] Nov 27 21:19:24.683: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:19:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:18:50 +0000 UTC }] Nov 27 21:19:24.684: INFO: Nov 27 21:19:24.684: INFO: StatefulSet ss has not reached scale 0, at 2 Nov 27 21:19:25.690: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.924401975s Nov 27 21:19:26.697: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.917886479s Nov 27 21:19:27.704: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.91099765s Nov 27 21:19:28.711: INFO: Verifying statefulset ss doesn't scale past 0 for another 903.882315ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-4151 Nov 27 21:19:29.718: INFO: Scaling statefulset ss to 0 Nov 27 21:19:29.733: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Nov 27 21:19:29.737: INFO: Deleting all statefulset in ns statefulset-4151 Nov 27 21:19:29.741: INFO: Scaling statefulset ss to 0 Nov 27 21:19:29.752: INFO: Waiting for statefulset status.replicas updated to 0 Nov 27 21:19:29.756: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:19:29.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4151" for this suite. Nov 27 21:19:35.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:19:35.985: INFO: namespace statefulset-4151 deletion completed in 6.20203304s • [SLOW TEST:67.575 seconds] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:19:35.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Nov 27 21:19:36.089: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4a401b17-7837-483b-9550-53cf7960a4f6" in namespace "projected-4257" to be "success or failure" Nov 27 21:19:36.101: INFO: Pod "downwardapi-volume-4a401b17-7837-483b-9550-53cf7960a4f6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.086029ms Nov 27 21:19:38.107: INFO: Pod "downwardapi-volume-4a401b17-7837-483b-9550-53cf7960a4f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018498202s Nov 27 21:19:40.115: INFO: Pod "downwardapi-volume-4a401b17-7837-483b-9550-53cf7960a4f6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026105919s STEP: Saw pod success Nov 27 21:19:40.115: INFO: Pod "downwardapi-volume-4a401b17-7837-483b-9550-53cf7960a4f6" satisfied condition "success or failure" Nov 27 21:19:40.121: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-4a401b17-7837-483b-9550-53cf7960a4f6 container client-container: STEP: delete the pod Nov 27 21:19:40.171: INFO: Waiting for pod downwardapi-volume-4a401b17-7837-483b-9550-53cf7960a4f6 to disappear Nov 27 21:19:40.211: INFO: Pod downwardapi-volume-4a401b17-7837-483b-9550-53cf7960a4f6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:19:40.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4257" for this suite. Nov 27 21:19:46.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:19:46.402: INFO: namespace projected-4257 deletion completed in 6.181658064s • [SLOW TEST:10.415 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:19:46.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-fe60f1a6-23c8-4564-9b2a-fc7c63620153 STEP: Creating secret with name secret-projected-all-test-volume-0d5ce26e-fcea-4105-9c3a-bff8c12b0fc7 STEP: Creating a pod to test Check all projections for projected volume plugin Nov 27 21:19:46.502: INFO: Waiting up to 5m0s for pod "projected-volume-6cb2fe3e-1ea7-4129-bb70-c637aad46a13" in namespace "projected-8774" to be "success or failure" Nov 27 21:19:46.547: INFO: Pod "projected-volume-6cb2fe3e-1ea7-4129-bb70-c637aad46a13": Phase="Pending", Reason="", readiness=false. Elapsed: 44.77051ms Nov 27 21:19:48.554: INFO: Pod "projected-volume-6cb2fe3e-1ea7-4129-bb70-c637aad46a13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051575996s Nov 27 21:19:50.561: INFO: Pod "projected-volume-6cb2fe3e-1ea7-4129-bb70-c637aad46a13": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.058026072s STEP: Saw pod success Nov 27 21:19:50.561: INFO: Pod "projected-volume-6cb2fe3e-1ea7-4129-bb70-c637aad46a13" satisfied condition "success or failure" Nov 27 21:19:50.565: INFO: Trying to get logs from node iruya-worker pod projected-volume-6cb2fe3e-1ea7-4129-bb70-c637aad46a13 container projected-all-volume-test: STEP: delete the pod Nov 27 21:19:50.597: INFO: Waiting for pod projected-volume-6cb2fe3e-1ea7-4129-bb70-c637aad46a13 to disappear Nov 27 21:19:50.624: INFO: Pod projected-volume-6cb2fe3e-1ea7-4129-bb70-c637aad46a13 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:19:50.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8774" for this suite. Nov 27 21:19:56.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:19:56.817: INFO: namespace projected-8774 deletion completed in 6.183353167s • [SLOW TEST:10.414 seconds] [sig-storage] Projected combined /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:19:56.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-e7d799cc-87a5-4ad7-83f6-fd31867ac4ab STEP: Creating a pod to test consume configMaps Nov 27 21:19:56.960: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4a033b43-ee7c-4b19-afbc-c76d518f3210" in namespace "projected-7272" to be "success or failure" Nov 27 21:19:56.976: INFO: Pod "pod-projected-configmaps-4a033b43-ee7c-4b19-afbc-c76d518f3210": Phase="Pending", Reason="", readiness=false. Elapsed: 15.143832ms Nov 27 21:19:59.006: INFO: Pod "pod-projected-configmaps-4a033b43-ee7c-4b19-afbc-c76d518f3210": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045392962s Nov 27 21:20:01.086: INFO: Pod "pod-projected-configmaps-4a033b43-ee7c-4b19-afbc-c76d518f3210": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.125748668s STEP: Saw pod success Nov 27 21:20:01.087: INFO: Pod "pod-projected-configmaps-4a033b43-ee7c-4b19-afbc-c76d518f3210" satisfied condition "success or failure" Nov 27 21:20:01.091: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-4a033b43-ee7c-4b19-afbc-c76d518f3210 container projected-configmap-volume-test: STEP: delete the pod Nov 27 21:20:01.133: INFO: Waiting for pod pod-projected-configmaps-4a033b43-ee7c-4b19-afbc-c76d518f3210 to disappear Nov 27 21:20:01.164: INFO: Pod pod-projected-configmaps-4a033b43-ee7c-4b19-afbc-c76d518f3210 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:20:01.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7272" for this suite. Nov 27 21:20:07.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:20:07.369: INFO: namespace projected-7272 deletion completed in 6.195046504s • [SLOW TEST:10.550 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 
[BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:20:07.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Nov 27 21:20:07.466: INFO: Waiting up to 5m0s for pod "client-containers-04e39e4e-bb4f-444b-9c9a-41f6b643b898" in namespace "containers-5332" to be "success or failure" Nov 27 21:20:07.484: INFO: Pod "client-containers-04e39e4e-bb4f-444b-9c9a-41f6b643b898": Phase="Pending", Reason="", readiness=false. Elapsed: 18.219611ms Nov 27 21:20:09.499: INFO: Pod "client-containers-04e39e4e-bb4f-444b-9c9a-41f6b643b898": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032755261s Nov 27 21:20:11.506: INFO: Pod "client-containers-04e39e4e-bb4f-444b-9c9a-41f6b643b898": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.03943366s STEP: Saw pod success Nov 27 21:20:11.506: INFO: Pod "client-containers-04e39e4e-bb4f-444b-9c9a-41f6b643b898" satisfied condition "success or failure" Nov 27 21:20:11.511: INFO: Trying to get logs from node iruya-worker pod client-containers-04e39e4e-bb4f-444b-9c9a-41f6b643b898 container test-container: STEP: delete the pod Nov 27 21:20:11.533: INFO: Waiting for pod client-containers-04e39e4e-bb4f-444b-9c9a-41f6b643b898 to disappear Nov 27 21:20:11.544: INFO: Pod client-containers-04e39e4e-bb4f-444b-9c9a-41f6b643b898 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:20:11.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5332" for this suite. Nov 27 21:20:17.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:20:17.731: INFO: namespace containers-5332 deletion completed in 6.178346265s • [SLOW TEST:10.361 seconds] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] 
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 21:20:17.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Nov 27 21:20:17.841: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 21:20:28.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4366" for this suite.
Nov 27 21:20:50.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 21:20:50.842: INFO: namespace init-container-4366 deletion completed in 22.18509696s
• [SLOW TEST:33.110 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 21:20:50.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Nov 27 21:20:50.933: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 21:20:56.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2759" for this suite.
Nov 27 21:21:02.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 21:21:03.075: INFO: namespace init-container-2759 deletion completed in 6.230510491s
• [SLOW TEST:12.231 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 21:21:03.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
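The two InitContainer tests logged above exercise complementary guarantees: init containers run to completion sequentially, in declaration order, before any app container starts; and under `restartPolicy: Never`, a failing init container marks the whole pod `Failed` without ever starting the app containers. Minimal sketches of the two pod shapes, with placeholder names and images (not the e2e framework's own manifests):

```python
# RestartAlways case: both init containers must exit 0, in order,
# before the app container is started.
restart_always_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "init-restart-always"},  # placeholder name
    "spec": {
        "restartPolicy": "Always",
        "initContainers": [
            {"name": "init1", "image": "busybox", "command": ["/bin/true"]},
            {"name": "init2", "image": "busybox", "command": ["/bin/true"]},
        ],
        "containers": [
            {"name": "run1", "image": "busybox", "command": ["sleep", "3600"]},
        ],
    },
}

# RestartNever case: `/bin/false` exits 1, so the init container fails,
# the pod phase becomes Failed, and "run1" is never started.
restart_never_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "init-restart-never"},  # placeholder name
    "spec": {
        "restartPolicy": "Never",
        "initContainers": [
            {"name": "init1", "image": "busybox", "command": ["/bin/false"]},
        ],
        "containers": [
            {"name": "run1", "image": "busybox", "command": ["/bin/true"]},
        ],
    },
}

# Init containers execute sequentially in declaration order.
order = [c["name"] for c in restart_always_pod["spec"]["initContainers"]]
assert order == ["init1", "init2"]
```

With `restartPolicy: Always`, a failing init container would instead be restarted in place until it succeeds, which is why the failure-propagation test uses `Never`.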
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-3383/configmap-test-0d37e8fd-8878-46cb-80c8-59188c6997ab
STEP: Creating a pod to test consume configMaps
Nov 27 21:21:03.237: INFO: Waiting up to 5m0s for pod "pod-configmaps-01d24238-de75-4e56-ab73-76038b55e609" in namespace "configmap-3383" to be "success or failure"
Nov 27 21:21:03.258: INFO: Pod "pod-configmaps-01d24238-de75-4e56-ab73-76038b55e609": Phase="Pending", Reason="", readiness=false. Elapsed: 20.612647ms
Nov 27 21:21:05.269: INFO: Pod "pod-configmaps-01d24238-de75-4e56-ab73-76038b55e609": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03196959s
Nov 27 21:21:07.275: INFO: Pod "pod-configmaps-01d24238-de75-4e56-ab73-76038b55e609": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038150977s
STEP: Saw pod success
Nov 27 21:21:07.276: INFO: Pod "pod-configmaps-01d24238-de75-4e56-ab73-76038b55e609" satisfied condition "success or failure"
Nov 27 21:21:07.280: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-01d24238-de75-4e56-ab73-76038b55e609 container env-test:
STEP: delete the pod
Nov 27 21:21:07.307: INFO: Waiting for pod pod-configmaps-01d24238-de75-4e56-ab73-76038b55e609 to disappear
Nov 27 21:21:07.317: INFO: Pod pod-configmaps-01d24238-de75-4e56-ab73-76038b55e609 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 21:21:07.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3383" for this suite.
Nov 27 21:21:13.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 21:21:13.791: INFO: namespace configmap-3383 deletion completed in 6.375449789s
• [SLOW TEST:10.715 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 21:21:13.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-b6af48f7-72ef-48ae-98b3-dbc9c90a32ca
STEP: Creating configMap with name cm-test-opt-upd-94e5a6bf-6f7a-4d16-93b3-9c2c8ae88a59
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-b6af48f7-72ef-48ae-98b3-dbc9c90a32ca
STEP:
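The "[sig-node] ConfigMap should be consumable via environment variable" test completed above wires a ConfigMap key into a container environment variable via `valueFrom.configMapKeyRef`. A hedged sketch of that pattern, using placeholder resource names and image (not the generated names from the run):

```python
# Hypothetical ConfigMap plus a pod that consumes one of its keys as an
# environment variable, analogous to the conformance test above.
configmap = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "configmap-test"},  # placeholder name
    "data": {"data-1": "value-1"},
}

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-configmaps-example"},  # placeholder name
    "spec": {
        "restartPolicy": "Never",
        "containers": [
            {
                "name": "env-test",
                "image": "busybox",  # placeholder image
                "command": ["sh", "-c", "env"],  # print env so logs show the value
                "env": [
                    {
                        "name": "CONFIG_DATA_1",
                        "valueFrom": {
                            "configMapKeyRef": {
                                "name": "configmap-test",  # must match the ConfigMap
                                "key": "data-1",
                            }
                        },
                    }
                ],
            }
        ],
    },
}

ref = pod["spec"]["containers"][0]["env"][0]["valueFrom"]["configMapKeyRef"]
assert ref["name"] == configmap["metadata"]["name"]
assert ref["key"] in configmap["data"]
```

The test then inspects the container's logs (as with the other "success or failure" pods above) to verify the variable carried the ConfigMap's value. Note that env-var references, unlike volume mounts, are resolved once at container start and do not pick up later ConfigMap updates.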
Updating configmap cm-test-opt-upd-94e5a6bf-6f7a-4d16-93b3-9c2c8ae88a59
STEP: Creating configMap with name cm-test-opt-create-3fc4ed23-2bed-4b1d-ac64-6b7d3702b2b8
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 21:21:22.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8567" for this suite.
Nov 27 21:21:46.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 21:21:46.271: INFO: namespace projected-8567 deletion completed in 24.200545087s
• [SLOW TEST:32.472 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 21:21:46.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account
to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Nov 27 21:21:56.493: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8426 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 21:21:56.493: INFO: >>> kubeConfig: /root/.kube/config
I1127 21:21:56.558582 7 log.go:172] (0x4003482e70) (0x40026292c0) Create stream
I1127 21:21:56.558750 7 log.go:172] (0x4003482e70) (0x40026292c0) Stream added, broadcasting: 1
I1127 21:21:56.562822 7 log.go:172] (0x4003482e70) Reply frame received for 1
I1127 21:21:56.563051 7 log.go:172] (0x4003482e70) (0x40029005a0) Create stream
I1127 21:21:56.563164 7 log.go:172] (0x4003482e70) (0x40029005a0) Stream added, broadcasting: 3
I1127 21:21:56.565292 7 log.go:172] (0x4003482e70) Reply frame received for 3
I1127 21:21:56.565566 7 log.go:172] (0x4003482e70) (0x4001639e00) Create stream
I1127 21:21:56.565649 7 log.go:172] (0x4003482e70) (0x4001639e00) Stream added, broadcasting: 5
I1127 21:21:56.567457 7 log.go:172] (0x4003482e70) Reply frame received for 5
I1127 21:21:56.636658 7 log.go:172] (0x4003482e70) Data frame received for 5
I1127 21:21:56.636822 7 log.go:172] (0x4001639e00) (5) Data frame handling
I1127 21:21:56.637104 7 log.go:172] (0x4003482e70) Data frame received for 3
I1127 21:21:56.637309 7 log.go:172] (0x40029005a0) (3) Data frame handling
I1127 21:21:56.637517 7 log.go:172] (0x40029005a0) (3) Data frame sent
I1127 21:21:56.637701 7 log.go:172] (0x4003482e70) Data frame received for 3
I1127 21:21:56.637869 7 log.go:172] (0x40029005a0) (3) Data frame handling
I1127 21:21:56.638563 7 log.go:172] (0x4003482e70) Data frame received for 1
I1127 21:21:56.638630 7 log.go:172] (0x40026292c0) (1) Data frame handling
I1127 21:21:56.638692 7 log.go:172] (0x40026292c0) (1) Data frame sent
I1127 21:21:56.638797 7 log.go:172] (0x4003482e70) (0x40026292c0) Stream removed, broadcasting: 1
I1127 21:21:56.639134 7 log.go:172] (0x4003482e70) Go away received
I1127 21:21:56.639336 7 log.go:172] (0x4003482e70) (0x40026292c0) Stream removed, broadcasting: 1
I1127 21:21:56.639438 7 log.go:172] (0x4003482e70) (0x40029005a0) Stream removed, broadcasting: 3
I1127 21:21:56.639511 7 log.go:172] (0x4003482e70) (0x4001639e00) Stream removed, broadcasting: 5
Nov 27 21:21:56.639: INFO: Exec stderr: ""
Nov 27 21:21:56.640: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8426 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 21:21:56.640: INFO: >>> kubeConfig: /root/.kube/config
I1127 21:21:56.699633 7 log.go:172] (0x40031048f0) (0x40031061e0) Create stream
I1127 21:21:56.699766 7 log.go:172] (0x40031048f0) (0x40031061e0) Stream added, broadcasting: 1
I1127 21:21:56.703459 7 log.go:172] (0x40031048f0) Reply frame received for 1
I1127 21:21:56.703782 7 log.go:172] (0x40031048f0) (0x4003106280) Create stream
I1127 21:21:56.703912 7 log.go:172] (0x40031048f0) (0x4003106280) Stream added, broadcasting: 3
I1127 21:21:56.705747 7 log.go:172] (0x40031048f0) Reply frame received for 3
I1127 21:21:56.705907 7 log.go:172] (0x40031048f0) (0x400181f400) Create stream
I1127 21:21:56.705996 7 log.go:172] (0x40031048f0) (0x400181f400) Stream added, broadcasting: 5
I1127 21:21:56.707518 7 log.go:172] (0x40031048f0) Reply frame received for 5
I1127 21:21:56.760115 7 log.go:172] (0x40031048f0) Data frame received for 3
I1127 21:21:56.760271 7 log.go:172] (0x4003106280) (3) Data frame handling
I1127 21:21:56.760354 7 log.go:172] (0x4003106280) (3) Data frame sent
I1127 21:21:56.760428 7 log.go:172] (0x40031048f0) Data frame received for 3
I1127 21:21:56.760491 7 log.go:172] (0x4003106280) (3) Data frame handling
I1127 21:21:56.760659 7 log.go:172] (0x40031048f0) Data frame received for 5
I1127 21:21:56.760806 7 log.go:172] (0x400181f400) (5) Data frame handling
I1127 21:21:56.761794 7 log.go:172] (0x40031048f0) Data frame received for 1
I1127 21:21:56.761966 7 log.go:172] (0x40031061e0) (1) Data frame handling
I1127 21:21:56.762100 7 log.go:172] (0x40031061e0) (1) Data frame sent
I1127 21:21:56.762236 7 log.go:172] (0x40031048f0) (0x40031061e0) Stream removed, broadcasting: 1
I1127 21:21:56.762375 7 log.go:172] (0x40031048f0) Go away received
I1127 21:21:56.762803 7 log.go:172] (0x40031048f0) (0x40031061e0) Stream removed, broadcasting: 1
I1127 21:21:56.763006 7 log.go:172] (0x40031048f0) (0x4003106280) Stream removed, broadcasting: 3
I1127 21:21:56.763123 7 log.go:172] (0x40031048f0) (0x400181f400) Stream removed, broadcasting: 5
Nov 27 21:21:56.763: INFO: Exec stderr: ""
Nov 27 21:21:56.763: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8426 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 21:21:56.763: INFO: >>> kubeConfig: /root/.kube/config
I1127 21:21:56.827248 7 log.go:172] (0x40020ff290) (0x4002bbd2c0) Create stream
I1127 21:21:56.827402 7 log.go:172] (0x40020ff290) (0x4002bbd2c0) Stream added, broadcasting: 1
I1127 21:21:56.831418 7 log.go:172] (0x40020ff290) Reply frame received for 1
I1127 21:21:56.831663 7 log.go:172] (0x40020ff290) (0x4002bbd360) Create stream
I1127 21:21:56.831798 7 log.go:172] (0x40020ff290) (0x4002bbd360) Stream added, broadcasting: 3
I1127 21:21:56.833783 7 log.go:172] (0x40020ff290) Reply frame received for 3
I1127 21:21:56.833934 7 log.go:172] (0x40020ff290) (0x4002834640) Create stream
I1127 21:21:56.834018 7 log.go:172] (0x40020ff290) (0x4002834640) Stream added, broadcasting: 5
I1127 21:21:56.835440 7 log.go:172] (0x40020ff290) Reply frame received for 5
I1127 21:21:56.909132 7 log.go:172] (0x40020ff290) Data frame received for 5
I1127 21:21:56.909275 7 log.go:172] (0x4002834640) (5) Data frame handling
I1127 21:21:56.909504 7 log.go:172] (0x40020ff290) Data frame received for 3
I1127 21:21:56.909670 7 log.go:172] (0x4002bbd360) (3) Data frame handling
I1127 21:21:56.909840 7 log.go:172] (0x4002bbd360) (3) Data frame sent
I1127 21:21:56.909971 7 log.go:172] (0x40020ff290) Data frame received for 3
I1127 21:21:56.910061 7 log.go:172] (0x4002bbd360) (3) Data frame handling
I1127 21:21:56.911134 7 log.go:172] (0x40020ff290) Data frame received for 1
I1127 21:21:56.911209 7 log.go:172] (0x4002bbd2c0) (1) Data frame handling
I1127 21:21:56.911279 7 log.go:172] (0x4002bbd2c0) (1) Data frame sent
I1127 21:21:56.911350 7 log.go:172] (0x40020ff290) (0x4002bbd2c0) Stream removed, broadcasting: 1
I1127 21:21:56.911509 7 log.go:172] (0x40020ff290) Go away received
I1127 21:21:56.911829 7 log.go:172] (0x40020ff290) (0x4002bbd2c0) Stream removed, broadcasting: 1
I1127 21:21:56.911931 7 log.go:172] (0x40020ff290) (0x4002bbd360) Stream removed, broadcasting: 3
I1127 21:21:56.912034 7 log.go:172] (0x40020ff290) (0x4002834640) Stream removed, broadcasting: 5
Nov 27 21:21:56.912: INFO: Exec stderr: ""
Nov 27 21:21:56.912: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8426 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 21:21:56.912: INFO: >>> kubeConfig: /root/.kube/config
I1127 21:21:56.968956 7 log.go:172] (0x4003164160) (0x4002bbd680) Create stream
I1127 21:21:56.969168 7 log.go:172] (0x4003164160) (0x4002bbd680) Stream added, broadcasting: 1
I1127 21:21:56.973094 7 log.go:172] (0x4003164160) Reply frame received for 1
I1127 21:21:56.973258 7 log.go:172] (0x4003164160) (0x4002900640) Create stream
I1127 21:21:56.973379 7 log.go:172] (0x4003164160) (0x4002900640) Stream added, broadcasting: 3
I1127 21:21:56.975029 7 log.go:172] (0x4003164160) Reply frame received for 3
I1127 21:21:56.975175 7 log.go:172] (0x4003164160) (0x40029006e0) Create stream
I1127 21:21:56.975257 7 log.go:172] (0x4003164160) (0x40029006e0) Stream added, broadcasting: 5
I1127 21:21:56.976796 7 log.go:172] (0x4003164160) Reply frame received for 5
I1127 21:21:57.033653 7 log.go:172] (0x4003164160) Data frame received for 3
I1127 21:21:57.033948 7 log.go:172] (0x4002900640) (3) Data frame handling
I1127 21:21:57.034086 7 log.go:172] (0x4002900640) (3) Data frame sent
I1127 21:21:57.034234 7 log.go:172] (0x4003164160) Data frame received for 3
I1127 21:21:57.034387 7 log.go:172] (0x4002900640) (3) Data frame handling
I1127 21:21:57.034648 7 log.go:172] (0x4003164160) Data frame received for 5
I1127 21:21:57.034816 7 log.go:172] (0x40029006e0) (5) Data frame handling
I1127 21:21:57.035158 7 log.go:172] (0x4003164160) Data frame received for 1
I1127 21:21:57.035228 7 log.go:172] (0x4002bbd680) (1) Data frame handling
I1127 21:21:57.035292 7 log.go:172] (0x4002bbd680) (1) Data frame sent
I1127 21:21:57.035363 7 log.go:172] (0x4003164160) (0x4002bbd680) Stream removed, broadcasting: 1
I1127 21:21:57.035450 7 log.go:172] (0x4003164160) Go away received
I1127 21:21:57.035826 7 log.go:172] (0x4003164160) (0x4002bbd680) Stream removed, broadcasting: 1
I1127 21:21:57.035941 7 log.go:172] (0x4003164160) (0x4002900640) Stream removed, broadcasting: 3
I1127 21:21:57.036040 7 log.go:172] (0x4003164160) (0x40029006e0) Stream removed, broadcasting: 5
Nov 27 21:21:57.036: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Nov 27 21:21:57.036: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8426 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 21:21:57.036: INFO: >>> kubeConfig: /root/.kube/config
I1127 21:21:57.101171 7 log.go:172] (0x4003105d90) (0x40031065a0) Create stream
I1127 21:21:57.101302 7 log.go:172] (0x4003105d90) (0x40031065a0) Stream added, broadcasting: 1
I1127 21:21:57.106274 7 log.go:172] (0x4003105d90) Reply frame received for 1
I1127 21:21:57.106479 7 log.go:172] (0x4003105d90) (0x4003106640) Create stream
I1127 21:21:57.106589 7 log.go:172] (0x4003105d90) (0x4003106640) Stream added, broadcasting: 3
I1127 21:21:57.108568 7 log.go:172] (0x4003105d90) Reply frame received for 3
I1127 21:21:57.108750 7 log.go:172] (0x4003105d90) (0x40031066e0) Create stream
I1127 21:21:57.108823 7 log.go:172] (0x4003105d90) (0x40031066e0) Stream added, broadcasting: 5
I1127 21:21:57.110249 7 log.go:172] (0x4003105d90) Reply frame received for 5
I1127 21:21:57.165642 7 log.go:172] (0x4003105d90) Data frame received for 3
I1127 21:21:57.165819 7 log.go:172] (0x4003106640) (3) Data frame handling
I1127 21:21:57.165960 7 log.go:172] (0x4003105d90) Data frame received for 5
I1127 21:21:57.166125 7 log.go:172] (0x40031066e0) (5) Data frame handling
I1127 21:21:57.166311 7 log.go:172] (0x4003106640) (3) Data frame sent
I1127 21:21:57.166438 7 log.go:172] (0x4003105d90) Data frame received for 3
I1127 21:21:57.166550 7 log.go:172] (0x4003106640) (3) Data frame handling
I1127 21:21:57.166909 7 log.go:172] (0x4003105d90) Data frame received for 1
I1127 21:21:57.167047 7 log.go:172] (0x40031065a0) (1) Data frame handling
I1127 21:21:57.167145 7 log.go:172] (0x40031065a0) (1) Data frame sent
I1127 21:21:57.167244 7 log.go:172] (0x4003105d90) (0x40031065a0) Stream removed, broadcasting: 1
I1127 21:21:57.167351 7 log.go:172] (0x4003105d90) Go away received
I1127 21:21:57.168261 7 log.go:172] (0x4003105d90) (0x40031065a0) Stream removed, broadcasting: 1
I1127 21:21:57.168393 7 log.go:172] (0x4003105d90) (0x4003106640) Stream removed, broadcasting: 3
I1127 21:21:57.168503 7 log.go:172] (0x4003105d90) (0x40031066e0) Stream removed, broadcasting: 5
Nov 27 21:21:57.168: INFO: Exec stderr: ""
Nov 27 21:21:57.168: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8426 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 21:21:57.168: INFO: >>> kubeConfig: /root/.kube/config
I1127 21:21:57.227108 7 log.go:172] (0x4003098fd0) (0x4003106a00) Create stream
I1127 21:21:57.227291 7 log.go:172] (0x4003098fd0) (0x4003106a00) Stream added, broadcasting: 1
I1127 21:21:57.231405 7 log.go:172] (0x4003098fd0) Reply frame received for 1
I1127 21:21:57.231540 7 log.go:172] (0x4003098fd0) (0x4002900780) Create stream
I1127 21:21:57.231614 7 log.go:172] (0x4003098fd0) (0x4002900780) Stream added, broadcasting: 3
I1127 21:21:57.233216 7 log.go:172] (0x4003098fd0) Reply frame received for 3
I1127 21:21:57.233408 7 log.go:172] (0x4003098fd0) (0x4002900820) Create stream
I1127 21:21:57.233516 7 log.go:172] (0x4003098fd0) (0x4002900820) Stream added, broadcasting: 5
I1127 21:21:57.235542 7 log.go:172] (0x4003098fd0) Reply frame received for 5
I1127 21:21:57.307090 7 log.go:172] (0x4003098fd0) Data frame received for 3
I1127 21:21:57.307228 7 log.go:172] (0x4002900780) (3) Data frame handling
I1127 21:21:57.307309 7 log.go:172] (0x4003098fd0) Data frame received for 5
I1127 21:21:57.307406 7 log.go:172] (0x4002900820) (5) Data frame handling
I1127 21:21:57.307505 7 log.go:172] (0x4002900780) (3) Data frame sent
I1127 21:21:57.307575 7 log.go:172] (0x4003098fd0) Data frame received for 3
I1127 21:21:57.307628 7 log.go:172] (0x4002900780) (3) Data frame handling
I1127 21:21:57.308147 7 log.go:172] (0x4003098fd0) Data frame received for 1
I1127 21:21:57.308208 7 log.go:172] (0x4003106a00) (1) Data frame handling
I1127 21:21:57.308268 7 log.go:172] (0x4003106a00) (1) Data frame sent
I1127 21:21:57.308336 7 log.go:172] (0x4003098fd0) (0x4003106a00) Stream removed, broadcasting: 1
I1127 21:21:57.308453 7 log.go:172] (0x4003098fd0) Go away received
I1127 21:21:57.308990 7 log.go:172] (0x4003098fd0) (0x4003106a00) Stream removed, broadcasting: 1
I1127 21:21:57.309075 7 log.go:172] (0x4003098fd0) (0x4002900780) Stream removed, broadcasting: 3
I1127 21:21:57.309145 7 log.go:172] (0x4003098fd0) (0x4002900820) Stream removed, broadcasting: 5
Nov 27 21:21:57.309: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Nov 27 21:21:57.309: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8426 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 21:21:57.309: INFO: >>> kubeConfig: /root/.kube/config
I1127 21:21:57.364604 7 log.go:172] (0x4003099810) (0x4003106be0) Create stream
I1127 21:21:57.364792 7 log.go:172] (0x4003099810) (0x4003106be0) Stream added, broadcasting: 1
I1127 21:21:57.368647 7 log.go:172] (0x4003099810) Reply frame received for 1
I1127 21:21:57.368970 7 log.go:172] (0x4003099810) (0x400181f720) Create stream
I1127 21:21:57.369090 7 log.go:172] (0x4003099810) (0x400181f720) Stream added, broadcasting: 3
I1127 21:21:57.370681 7 log.go:172] (0x4003099810) Reply frame received for 3
I1127 21:21:57.370804 7 log.go:172] (0x4003099810) (0x4003106c80) Create stream
I1127 21:21:57.370874 7 log.go:172] (0x4003099810) (0x4003106c80) Stream added, broadcasting: 5
I1127 21:21:57.372322 7 log.go:172] (0x4003099810) Reply frame received for 5
I1127 21:21:57.426267 7 log.go:172] (0x4003099810) Data frame received for 3
I1127 21:21:57.426424 7 log.go:172] (0x400181f720) (3) Data frame handling
I1127 21:21:57.426496 7 log.go:172] (0x400181f720) (3) Data frame sent
I1127 21:21:57.426553 7 log.go:172] (0x4003099810) Data frame received for 3
I1127 21:21:57.426608 7 log.go:172] (0x400181f720) (3) Data frame handling
I1127 21:21:57.426881 7 log.go:172] (0x4003099810) Data frame received for 5
I1127 21:21:57.427110 7 log.go:172] (0x4003106c80) (5) Data frame handling
I1127 21:21:57.427514 7 log.go:172] (0x4003099810) Data frame received for 1
I1127 21:21:57.427667 7 log.go:172] (0x4003106be0) (1) Data frame handling
I1127 21:21:57.427900 7 log.go:172] (0x4003106be0) (1) Data frame sent
I1127 21:21:57.428025 7 log.go:172] (0x4003099810) (0x4003106be0) Stream removed, broadcasting: 1
I1127 21:21:57.428184 7 log.go:172] (0x4003099810) Go away received
I1127 21:21:57.428720 7 log.go:172] (0x4003099810) (0x4003106be0) Stream removed, broadcasting: 1
I1127 21:21:57.429047 7 log.go:172] (0x4003099810) (0x400181f720) Stream removed, broadcasting: 3
I1127 21:21:57.429202 7 log.go:172] (0x4003099810) (0x4003106c80) Stream removed, broadcasting: 5
Nov 27 21:21:57.429: INFO: Exec stderr: ""
Nov 27 21:21:57.429: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8426 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 21:21:57.429: INFO: >>> kubeConfig: /root/.kube/config
I1127 21:21:57.493676 7 log.go:172] (0x40033240b0) (0x40026295e0) Create stream
I1127 21:21:57.493892 7 log.go:172] (0x40033240b0) (0x40026295e0) Stream added, broadcasting: 1
I1127 21:21:57.498503 7 log.go:172] (0x40033240b0) Reply frame received for 1
I1127 21:21:57.498711 7 log.go:172] (0x40033240b0) (0x4002629680) Create stream
I1127 21:21:57.498796 7 log.go:172] (0x40033240b0) (0x4002629680) Stream added, broadcasting: 3
I1127 21:21:57.500995 7 log.go:172] (0x40033240b0) Reply frame received for 3
I1127 21:21:57.501236 7 log.go:172] (0x40033240b0) (0x40029008c0) Create stream
I1127 21:21:57.501349 7 log.go:172] (0x40033240b0) (0x40029008c0) Stream added, broadcasting: 5
I1127 21:21:57.502549 7 log.go:172] (0x40033240b0) Reply frame received for 5
I1127 21:21:57.566887 7 log.go:172] (0x40033240b0) Data frame received for 5
I1127 21:21:57.567023 7 log.go:172] (0x40029008c0) (5) Data frame handling
I1127 21:21:57.567153 7 log.go:172] (0x40033240b0) Data frame received for 3
I1127 21:21:57.567270 7 log.go:172] (0x4002629680) (3) Data frame handling
I1127 21:21:57.567405 7 log.go:172] (0x4002629680) (3) Data frame sent
I1127 21:21:57.567528 7 log.go:172] (0x40033240b0) Data frame received for 3
I1127 21:21:57.567656 7 log.go:172] (0x4002629680) (3) Data frame handling
I1127 21:21:57.568308 7 log.go:172] (0x40033240b0) Data frame received for 1
I1127 21:21:57.568414 7 log.go:172] (0x40026295e0) (1) Data frame handling
I1127 21:21:57.568514 7 log.go:172] (0x40026295e0) (1) Data frame sent
I1127 21:21:57.568628 7 log.go:172] (0x40033240b0) (0x40026295e0) Stream removed, broadcasting: 1
I1127 21:21:57.568763 7 log.go:172] (0x40033240b0) Go away received
I1127 21:21:57.569226 7 log.go:172] (0x40033240b0) (0x40026295e0) Stream removed, broadcasting: 1
I1127 21:21:57.569343 7 log.go:172] (0x40033240b0) (0x4002629680) Stream removed, broadcasting: 3
I1127 21:21:57.569472 7 log.go:172] (0x40033240b0) (0x40029008c0) Stream removed, broadcasting: 5
Nov 27 21:21:57.569: INFO: Exec stderr: ""
Nov 27 21:21:57.569: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8426 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 21:21:57.569: INFO: >>> kubeConfig: /root/.kube/config
I1127 21:21:57.638748 7 log.go:172] (0x4003164e70) (0x4002bbd9a0) Create stream
I1127 21:21:57.639230 7 log.go:172] (0x4003164e70) (0x4002bbd9a0) Stream added, broadcasting: 1
I1127 21:21:57.644705 7 log.go:172] (0x4003164e70) Reply frame received for 1
I1127 21:21:57.645185 7 log.go:172] (0x4003164e70) (0x4002bbda40) Create stream
I1127 21:21:57.645343 7 log.go:172] (0x4003164e70) (0x4002bbda40) Stream added, broadcasting: 3
I1127 21:21:57.647973 7 log.go:172] (0x4003164e70) Reply frame received for 3
I1127 21:21:57.648304 7 log.go:172] (0x4003164e70) (0x4002900960) Create stream
I1127 21:21:57.648478 7 log.go:172] (0x4003164e70) (0x4002900960) Stream added, broadcasting: 5
I1127 21:21:57.650213 7 log.go:172] (0x4003164e70) Reply frame received for 5
I1127 21:21:57.704181 7 log.go:172] (0x4003164e70) Data frame received for 3
I1127 21:21:57.704393 7 log.go:172] (0x4002bbda40) (3) Data frame handling
I1127 21:21:57.704549 7 log.go:172] (0x4003164e70) Data frame received for 5
I1127 21:21:57.704719 7 log.go:172] (0x4002900960) (5) Data frame handling
I1127 21:21:57.704845 7 log.go:172] (0x4002bbda40) (3) Data frame sent
I1127 21:21:57.705044 7 log.go:172] (0x4003164e70) Data frame received for 3
I1127 21:21:57.705141 7 log.go:172] (0x4002bbda40) (3) Data frame handling
I1127 21:21:57.705681 7 log.go:172] (0x4003164e70) Data frame received for 1
I1127 21:21:57.705782 7 log.go:172] (0x4002bbd9a0) (1) Data frame handling
I1127 21:21:57.705890 7 log.go:172] (0x4002bbd9a0) (1) Data frame sent
I1127 21:21:57.706110 7 log.go:172] (0x4003164e70) (0x4002bbd9a0) Stream removed, broadcasting: 1
I1127 21:21:57.706244 7 log.go:172] (0x4003164e70) Go away received
I1127 21:21:57.706735 7 log.go:172] (0x4003164e70) (0x4002bbd9a0) Stream removed, broadcasting: 1
I1127 21:21:57.706880 7 log.go:172] (0x4003164e70) (0x4002bbda40) Stream removed, broadcasting: 3
I1127 21:21:57.706982 7 log.go:172] (0x4003164e70) (0x4002900960) Stream removed, broadcasting: 5
Nov 27 21:21:57.707: INFO: Exec stderr: ""
Nov 27 21:21:57.707: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8426 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 21:21:57.707: INFO: >>> kubeConfig: /root/.kube/config
I1127 21:21:57.776620 7 log.go:172] (0x4000d633f0) (0x400181fb80) Create stream
I1127 21:21:57.776800 7 log.go:172] (0x4000d633f0) (0x400181fb80) Stream added, broadcasting: 1
I1127 21:21:57.780107 7 log.go:172] (0x4000d633f0) Reply frame received for 1
I1127 21:21:57.780289 7 log.go:172] (0x4000d633f0) (0x4002900a00) Create stream
I1127 21:21:57.780365 7 log.go:172] (0x4000d633f0) (0x4002900a00) Stream added, broadcasting: 3
I1127 21:21:57.781986 7 log.go:172] (0x4000d633f0) Reply frame received for 3
I1127 21:21:57.782161 7 log.go:172] (0x4000d633f0) (0x400181fc20) Create stream
I1127 21:21:57.782259 7 log.go:172] (0x4000d633f0) (0x400181fc20) Stream added, broadcasting: 5
I1127 21:21:57.783765 7 log.go:172] (0x4000d633f0) Reply frame received for 5
I1127 21:21:57.845707 7 log.go:172] (0x4000d633f0) Data frame received for 3
I1127 21:21:57.845924 7 log.go:172] (0x4002900a00) (3) Data frame handling
I1127 21:21:57.846088 7 log.go:172] (0x4000d633f0) Data frame received for 5
I1127 21:21:57.846303 7 log.go:172] (0x400181fc20) (5) Data frame handling
I1127 21:21:57.846476 7 log.go:172] (0x4002900a00) (3) Data frame sent
I1127 21:21:57.846633 7 log.go:172] (0x4000d633f0) Data frame received for 3
I1127 21:21:57.846740 7 log.go:172] (0x4002900a00) (3) Data frame handling
I1127 21:21:57.847533 7 log.go:172] (0x4000d633f0) Data frame received for 1
I1127 21:21:57.847708 7 log.go:172] (0x400181fb80) (1) Data frame handling
I1127 21:21:57.847887 7 log.go:172] (0x400181fb80) (1) Data frame sent
I1127 21:21:57.848055 7 log.go:172] (0x4000d633f0) (0x400181fb80) Stream removed, broadcasting: 1
I1127 21:21:57.848215 7 log.go:172] (0x4000d633f0) Go away received
I1127 21:21:57.848706 7 log.go:172] (0x4000d633f0) (0x400181fb80) Stream removed, broadcasting: 1
I1127 21:21:57.848996 7 log.go:172] (0x4000d633f0) (0x4002900a00) Stream removed, broadcasting: 3
I1127 21:21:57.849124 7 log.go:172] (0x4000d633f0) (0x400181fc20) Stream removed, broadcasting: 5
Nov 27 21:21:57.849: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:21:57.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-8426" for this suite. Nov 27 21:22:47.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:22:48.057: INFO: namespace e2e-kubelet-etc-hosts-8426 deletion completed in 50.199728134s • [SLOW TEST:61.782 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:22:48.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones 
[Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Nov 27 21:22:48.170: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Nov 27 21:22:48.186: INFO: Pod name sample-pod: Found 0 pods out of 1 Nov 27 21:22:53.194: INFO: Pod name sample-pod: Found 1 pod out of 1 STEP: ensuring each pod is running Nov 27 21:22:53.197: INFO: Creating deployment "test-rolling-update-deployment" Nov 27 21:22:53.205: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Nov 27 21:22:53.272: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Nov 27 21:22:55.289: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected Nov 27 21:22:55.364: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63742108973, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63742108973, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63742108973, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63742108973, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 27 21:22:57.372: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one
it adopted) [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Nov 27 21:22:57.404: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-6899,SelfLink:/apis/apps/v1/namespaces/deployment-6899/deployments/test-rolling-update-deployment,UID:2755fcd5-4cc4-4278-8dad-17091251bbf1,ResourceVersion:11913033,Generation:1,CreationTimestamp:2020-11-27 21:22:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-11-27 21:22:53 +0000 UTC 2020-11-27 21:22:53 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-11-27 21:22:56 +0000 UTC 2020-11-27 21:22:53 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Nov 27 21:22:57.412: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-6899,SelfLink:/apis/apps/v1/namespaces/deployment-6899/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:26155ece-fc59-47af-b23c-8fa8dc01167c,ResourceVersion:11913021,Generation:1,CreationTimestamp:2020-11-27 21:22:53 +0000
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 2755fcd5-4cc4-4278-8dad-17091251bbf1 0x4002766517 0x4002766518}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Nov 27 21:22:57.412: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Nov 27 21:22:57.414: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-6899,SelfLink:/apis/apps/v1/namespaces/deployment-6899/replicasets/test-rolling-update-controller,UID:f7083857-d0c2-484d-921b-cf60ded24835,ResourceVersion:11913031,Generation:2,CreationTimestamp:2020-11-27 21:22:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 2755fcd5-4cc4-4278-8dad-17091251bbf1 0x400276641f 0x4002766430}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Nov 27 21:22:57.424: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-nvl4d" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-nvl4d,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-6899,SelfLink:/api/v1/namespaces/deployment-6899/pods/test-rolling-update-deployment-79f6b9d75c-nvl4d,UID:5d0e3f07-1b4c-430b-a148-bcd469bd4b7c,ResourceVersion:11913020,Generation:0,CreationTimestamp:2020-11-27 21:22:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 26155ece-fc59-47af-b23c-8fa8dc01167c 0x4002766dd7 0x4002766dd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k9kv5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k9kv5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-k9kv5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4002766e50} {node.kubernetes.io/unreachable Exists NoExecute 0x4002766e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:22:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:22:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:22:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:22:53 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.236,StartTime:2020-11-27 21:22:53 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-11-27 21:22:55 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://f465b5832fbd2dc17db738ea6c79a6965521e73247ec78a3a709dfe459cdd188}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:22:57.425: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6899" for this suite. Nov 27 21:23:05.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:23:05.614: INFO: namespace deployment-6899 deletion completed in 8.18182736s • [SLOW TEST:17.555 seconds] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:23:05.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Nov 27 21:23:05.678: INFO: Creating deployment 
"test-recreate-deployment" Nov 27 21:23:05.689: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 Nov 27 21:23:05.709: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Nov 27 21:23:07.722: INFO: Waiting for deployment "test-recreate-deployment" to complete Nov 27 21:23:07.727: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63742108985, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63742108985, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63742108985, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63742108985, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 27 21:23:09.734: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Nov 27 21:23:09.749: INFO: Updating deployment test-recreate-deployment Nov 27 21:23:09.749: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Nov 27 21:23:09.991: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-6080,SelfLink:/apis/apps/v1/namespaces/deployment-6080/deployments/test-recreate-deployment,UID:51161fd6-3d0b-4b75-a12c-bd13941cc735,ResourceVersion:11913126,Generation:2,CreationTimestamp:2020-11-27 21:23:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-11-27 21:23:09 +0000 UTC 2020-11-27 21:23:09 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-11-27 21:23:09 +0000 UTC 2020-11-27 21:23:05 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Nov 27 21:23:10.056: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-6080,SelfLink:/apis/apps/v1/namespaces/deployment-6080/replicasets/test-recreate-deployment-5c8c9cc69d,UID:502cf699-f625-45b6-a33e-8c5ced51cc49,ResourceVersion:11913123,Generation:1,CreationTimestamp:2020-11-27 21:23:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 51161fd6-3d0b-4b75-a12c-bd13941cc735 0x400335ecf7 0x400335ecf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Nov 27 21:23:10.056: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Nov 27 21:23:10.057: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-6080,SelfLink:/apis/apps/v1/namespaces/deployment-6080/replicasets/test-recreate-deployment-6df85df6b9,UID:67b6a5e7-28ac-4d9d-9acf-cde364a7c154,ResourceVersion:11913115,Generation:2,CreationTimestamp:2020-11-27 21:23:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 51161fd6-3d0b-4b75-a12c-bd13941cc735 0x400335edd7 0x400335edd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Nov 27 21:23:10.067: INFO: Pod "test-recreate-deployment-5c8c9cc69d-gkwvx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-gkwvx,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-6080,SelfLink:/api/v1/namespaces/deployment-6080/pods/test-recreate-deployment-5c8c9cc69d-gkwvx,UID:530946e1-7e39-4657-bb22-3b6fb78ba3d3,ResourceVersion:11913127,Generation:0,CreationTimestamp:2020-11-27 21:23:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 502cf699-f625-45b6-a33e-8c5ced51cc49 0x400335f687 0x400335f688}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xnlpp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xnlpp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xnlpp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400335f700} {node.kubernetes.io/unreachable Exists NoExecute 0x400335f720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:23:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:23:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:23:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:23:09 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-11-27 21:23:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:23:10.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6080" for this 
suite. Nov 27 21:23:16.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:23:16.431: INFO: namespace deployment-6080 deletion completed in 6.355358078s • [SLOW TEST:10.813 seconds] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:23:16.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Nov 27 21:23:16.521: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 27 21:23:16.549: INFO: Waiting for terminating namespaces to be deleted... 
Nov 27 21:23:16.554: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Nov 27 21:23:16.565: INFO: chaos-controller-manager-6c68f56f79-dmwmx from default started at 2020-11-23 00:43:52 +0000 UTC (1 container statuses recorded)
Nov 27 21:23:16.565: INFO: Container chaos-mesh ready: true, restart count 0
Nov 27 21:23:16.565: INFO: chaos-daemon-m4wrh from default started at 2020-11-23 00:43:52 +0000 UTC (1 container statuses recorded)
Nov 27 21:23:16.565: INFO: Container chaos-daemon ready: true, restart count 0
Nov 27 21:23:16.565: INFO: kube-proxy-mtljr from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Nov 27 21:23:16.565: INFO: Container kube-proxy ready: true, restart count 0
Nov 27 21:23:16.565: INFO: kindnet-7bsvw from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Nov 27 21:23:16.565: INFO: Container kindnet-cni ready: true, restart count 0
Nov 27 21:23:16.565: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Nov 27 21:23:16.576: INFO: kindnet-djqgh from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Nov 27 21:23:16.576: INFO: Container kindnet-cni ready: true, restart count 0
Nov 27 21:23:16.576: INFO: chaos-daemon-fcg7h from default started at 2020-11-23 00:43:52 +0000 UTC (1 container statuses recorded)
Nov 27 21:23:16.577: INFO: Container chaos-daemon ready: true, restart count 0
Nov 27 21:23:16.577: INFO: kube-proxy-52wt5 from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded)
Nov 27 21:23:16.577: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
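For reference, a pod that produces this kind of FailedScheduling event carries a nodeSelector that no node's labels satisfy. A minimal sketch (the label key and value are illustrative, not taken from the test source):

```yaml
# Hypothetical pod whose nodeSelector matches no node, so the scheduler
# reports: "0/3 nodes are available: 3 node(s) didn't match node selector."
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
  nodeSelector:
    some-label-no-node-has: "some-value"
```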
STEP: Considering event: Type = [Warning], Name = [restricted-pod.164b7a54c7b62da4], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:23:17.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2386" for this suite. Nov 27 21:23:23.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:23:23.870: INFO: namespace sched-pred-2386 deletion completed in 6.214401623s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.439 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:23:23.872: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Nov 27 21:23:23.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4379' Nov 27 21:23:25.243: INFO: stderr: "" Nov 27 21:23:25.243: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Nov 27 21:23:25.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4379' Nov 27 21:23:35.646: INFO: stderr: "" Nov 27 21:23:35.646: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:23:35.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4379" for this suite. 
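The `kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1` invocation above creates a bare pod rather than a Deployment; a roughly equivalent manifest (assumed, reconstructed from the flags shown) would be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: kubectl-4379
spec:
  restartPolicy: Never   # --restart=Never selects the run-pod/v1 generator
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/nginx:1.14-alpine
```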
Nov 27 21:23:41.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:23:41.881: INFO: namespace kubectl-4379 deletion completed in 6.203548976s • [SLOW TEST:18.009 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:23:41.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-3cdcb86e-d70a-4054-a46f-b5a4ec3b4a79 in namespace container-probe-8652 Nov 27 21:23:45.970: INFO: Started pod busybox-3cdcb86e-d70a-4054-a46f-b5a4ec3b4a79 in namespace container-probe-8652 STEP: checking the pod's current state and verifying that restartCount is present Nov 27 21:23:45.976: INFO: Initial restart count of pod busybox-3cdcb86e-d70a-4054-a46f-b5a4ec3b4a79 is 0 Nov 27 21:24:34.151: INFO: Restart count of pod container-probe-8652/busybox-3cdcb86e-d70a-4054-a46f-b5a4ec3b4a79 is now 1 (48.175437487s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:24:34.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8652" for this suite. 
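The restart observed above (count 1 after ~48s) is what an exec liveness probe produces once its command starts failing. A sketch of such a pod, assuming the usual touch-then-remove pattern (image, timings, and command are illustrative, not copied from the test source):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness
spec:
  containers:
  - name: busybox
    image: busybox
    # Create the probe file, then remove it so "cat /tmp/health" starts
    # failing and the kubelet restarts the container.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3
```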
Nov 27 21:24:40.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:24:40.465: INFO: namespace container-probe-8652 deletion completed in 6.200929367s • [SLOW TEST:58.582 seconds] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:24:40.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:24:44.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-338" for this suite. Nov 27 21:25:22.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:25:22.819: INFO: namespace kubelet-test-338 deletion completed in 38.192173165s • [SLOW TEST:42.351 seconds] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:25:22.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the 
previous watch [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Nov 27 21:25:22.930: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8590,SelfLink:/api/v1/namespaces/watch-8590/configmaps/e2e-watch-test-watch-closed,UID:3befe2c4-2a43-4c10-84ea-af5d4a14b757,ResourceVersion:11913509,Generation:0,CreationTimestamp:2020-11-27 21:25:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Nov 27 21:25:22.932: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8590,SelfLink:/api/v1/namespaces/watch-8590/configmaps/e2e-watch-test-watch-closed,UID:3befe2c4-2a43-4c10-84ea-af5d4a14b757,ResourceVersion:11913510,Generation:0,CreationTimestamp:2020-11-27 21:25:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since 
the first watch closed Nov 27 21:25:22.955: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8590,SelfLink:/api/v1/namespaces/watch-8590/configmaps/e2e-watch-test-watch-closed,UID:3befe2c4-2a43-4c10-84ea-af5d4a14b757,ResourceVersion:11913511,Generation:0,CreationTimestamp:2020-11-27 21:25:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Nov 27 21:25:22.956: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8590,SelfLink:/api/v1/namespaces/watch-8590/configmaps/e2e-watch-test-watch-closed,UID:3befe2c4-2a43-4c10-84ea-af5d4a14b757,ResourceVersion:11913512,Generation:0,CreationTimestamp:2020-11-27 21:25:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:25:22.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8590" for this suite. 
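The restart-from-last-observed-version behavior exercised here maps directly to the watch API's `resourceVersion` parameter: the second watch is opened at the ResourceVersion the first watch last delivered (11913510 above), so the later MODIFIED (mutation: 2) and DELETED events are replayed. As a raw API request this looks roughly like:

```
GET /api/v1/namespaces/watch-8590/configmaps?watch=1&resourceVersion=11913510
```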
Nov 27 21:25:29.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:25:29.161: INFO: namespace watch-8590 deletion completed in 6.171601177s • [SLOW TEST:6.340 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:25:29.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-2baf5505-7b00-4a16-8a48-19381ff69a40 STEP: Creating a pod to test consume configMaps Nov 27 21:25:29.321: INFO: Waiting up to 5m0s for pod "pod-configmaps-8af11cbf-9039-4abc-a079-ef0f19319931" in namespace "configmap-3834" to be 
"success or failure" Nov 27 21:25:29.445: INFO: Pod "pod-configmaps-8af11cbf-9039-4abc-a079-ef0f19319931": Phase="Pending", Reason="", readiness=false. Elapsed: 123.849575ms Nov 27 21:25:31.451: INFO: Pod "pod-configmaps-8af11cbf-9039-4abc-a079-ef0f19319931": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130265205s Nov 27 21:25:33.463: INFO: Pod "pod-configmaps-8af11cbf-9039-4abc-a079-ef0f19319931": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.141599863s STEP: Saw pod success Nov 27 21:25:33.463: INFO: Pod "pod-configmaps-8af11cbf-9039-4abc-a079-ef0f19319931" satisfied condition "success or failure" Nov 27 21:25:33.467: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-8af11cbf-9039-4abc-a079-ef0f19319931 container configmap-volume-test: STEP: delete the pod Nov 27 21:25:33.542: INFO: Waiting for pod pod-configmaps-8af11cbf-9039-4abc-a079-ef0f19319931 to disappear Nov 27 21:25:33.853: INFO: Pod pod-configmaps-8af11cbf-9039-4abc-a079-ef0f19319931 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:25:33.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3834" for this suite. 
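A sketch of the kind of pod this test creates: a configMap volume consumed by a container running as a non-root UID. Names, image, and UID here are illustrative, not copied from the test source:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-nonroot
spec:
  securityContext:
    runAsUser: 1000          # non-root UID, per [LinuxOnly] variant
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
```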
Nov 27 21:25:39.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:25:40.051: INFO: namespace configmap-3834 deletion completed in 6.189867847s • [SLOW TEST:10.887 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:25:40.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
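A minimal DaemonSet of the shape this test creates (image and labels are illustrative). Note it carries no master toleration, which is why the log repeatedly skips the tainted iruya-control-plane node:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```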
Nov 27 21:25:40.230: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 27 21:25:40.245: INFO: Number of nodes with available pods: 0 Nov 27 21:25:40.246: INFO: Node iruya-worker is running more than one daemon pod Nov 27 21:25:41.256: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 27 21:25:41.262: INFO: Number of nodes with available pods: 0 Nov 27 21:25:41.262: INFO: Node iruya-worker is running more than one daemon pod Nov 27 21:25:42.261: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 27 21:25:42.267: INFO: Number of nodes with available pods: 0 Nov 27 21:25:42.267: INFO: Node iruya-worker is running more than one daemon pod Nov 27 21:25:43.257: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 27 21:25:43.265: INFO: Number of nodes with available pods: 0 Nov 27 21:25:43.265: INFO: Node iruya-worker is running more than one daemon pod Nov 27 21:25:44.299: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 27 21:25:44.305: INFO: Number of nodes with available pods: 2 Nov 27 21:25:44.305: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Nov 27 21:25:44.353: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 27 21:25:44.359: INFO: Number of nodes with available pods: 1 Nov 27 21:25:44.359: INFO: Node iruya-worker is running more than one daemon pod Nov 27 21:25:45.372: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 27 21:25:45.378: INFO: Number of nodes with available pods: 1 Nov 27 21:25:45.379: INFO: Node iruya-worker is running more than one daemon pod Nov 27 21:25:46.370: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 27 21:25:46.375: INFO: Number of nodes with available pods: 1 Nov 27 21:25:46.375: INFO: Node iruya-worker is running more than one daemon pod Nov 27 21:25:47.370: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 27 21:25:47.376: INFO: Number of nodes with available pods: 1 Nov 27 21:25:47.376: INFO: Node iruya-worker is running more than one daemon pod Nov 27 21:25:48.369: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 27 21:25:48.375: INFO: Number of nodes with available pods: 1 Nov 27 21:25:48.375: INFO: Node iruya-worker is running more than one daemon pod Nov 27 21:25:49.372: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 27 21:25:49.379: INFO: Number of nodes with available pods: 1 Nov 27 21:25:49.379: INFO: Node 
iruya-worker is running more than one daemon pod Nov 27 21:25:50.368: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 27 21:25:50.374: INFO: Number of nodes with available pods: 1 Nov 27 21:25:50.374: INFO: Node iruya-worker is running more than one daemon pod Nov 27 21:25:51.371: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 27 21:25:51.379: INFO: Number of nodes with available pods: 2 Nov 27 21:25:51.379: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5765, will wait for the garbage collector to delete the pods Nov 27 21:25:51.452: INFO: Deleting DaemonSet.extensions daemon-set took: 10.413263ms Nov 27 21:25:51.754: INFO: Terminating DaemonSet.extensions daemon-set pods took: 302.410329ms Nov 27 21:26:05.461: INFO: Number of nodes with available pods: 0 Nov 27 21:26:05.461: INFO: Number of running nodes: 0, number of available pods: 0 Nov 27 21:26:05.485: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5765/daemonsets","resourceVersion":"11913685"},"items":null} Nov 27 21:26:05.490: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5765/pods","resourceVersion":"11913685"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:26:05.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5765" for this suite. Nov 27 21:26:11.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:26:11.688: INFO: namespace daemonsets-5765 deletion completed in 6.166634492s • [SLOW TEST:31.634 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:26:11.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-722 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-722 STEP: Deleting pre-stop pod Nov 27 21:26:24.865: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:26:24.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-722" for this suite. 
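The PreStop spec above verifies that a pod's preStop hook fires (the server records `"prestop": 1`) before the container is killed on deletion. For reference, such a hook is declared in the pod's lifecycle block; a minimal sketch, with name, image, and command illustrative rather than the test's actual server/tester pods:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo            # hypothetical name, not from this run
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: app
    image: busybox              # illustrative image
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          # Runs inside the container when deletion starts, before SIGTERM;
          # the grace period must cover both the hook and shutdown.
          command: ["/bin/sh", "-c", "echo prestop > /tmp/hook-ran"]
```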
Nov 27 21:27:06.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:27:07.092: INFO: namespace prestop-722 deletion completed in 42.210964184s • [SLOW TEST:55.403 seconds] [k8s.io] [sig-node] PreStop /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:27:07.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Nov 27 21:27:07.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Nov 27 21:27:08.540: INFO: 
stderr: "" Nov 27 21:27:08.540: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npingcap.com/v1alpha1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:27:08.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4631" for this suite. 
Nov 27 21:27:14.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:27:14.738: INFO: namespace kubectl-4631 deletion completed in 6.187097227s • [SLOW TEST:7.639 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:27:14.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Nov 27 21:27:14.836: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4626a2a0-ba0c-4a54-a2b6-b3db2a4c60ac" in namespace "projected-7689" to be "success or failure" Nov 27 21:27:14.882: INFO: Pod "downwardapi-volume-4626a2a0-ba0c-4a54-a2b6-b3db2a4c60ac": Phase="Pending", Reason="", readiness=false. Elapsed: 46.153893ms Nov 27 21:27:16.889: INFO: Pod "downwardapi-volume-4626a2a0-ba0c-4a54-a2b6-b3db2a4c60ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052860568s Nov 27 21:27:18.897: INFO: Pod "downwardapi-volume-4626a2a0-ba0c-4a54-a2b6-b3db2a4c60ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06026972s STEP: Saw pod success Nov 27 21:27:18.897: INFO: Pod "downwardapi-volume-4626a2a0-ba0c-4a54-a2b6-b3db2a4c60ac" satisfied condition "success or failure" Nov 27 21:27:18.903: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-4626a2a0-ba0c-4a54-a2b6-b3db2a4c60ac container client-container: STEP: delete the pod Nov 27 21:27:18.943: INFO: Waiting for pod downwardapi-volume-4626a2a0-ba0c-4a54-a2b6-b3db2a4c60ac to disappear Nov 27 21:27:18.986: INFO: Pod downwardapi-volume-4626a2a0-ba0c-4a54-a2b6-b3db2a4c60ac no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:27:18.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7689" for this suite. 
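The projected downward API spec above mounts the container's own CPU request as a file and asserts the pod succeeds. A minimal sketch of that projection, with names and values illustrative rather than taken from the test pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo           # hypothetical name, not from this run
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox              # illustrative image
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m       # file would contain "250" for a 250m request
```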
Nov 27 21:27:25.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:27:25.210: INFO: namespace projected-7689 deletion completed in 6.213896869s • [SLOW TEST:10.470 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:27:25.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:27:29.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5502" for this suite. Nov 27 21:27:35.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:27:35.679: INFO: namespace kubelet-test-5502 deletion completed in 6.242651384s • [SLOW TEST:10.466 seconds] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:27:35.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Nov 27 21:27:35.805: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2542,SelfLink:/api/v1/namespaces/watch-2542/configmaps/e2e-watch-test-configmap-a,UID:f6db983c-5fad-4815-993c-b954709fa9d8,ResourceVersion:11913989,Generation:0,CreationTimestamp:2020-11-27 21:27:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Nov 27 21:27:35.806: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2542,SelfLink:/api/v1/namespaces/watch-2542/configmaps/e2e-watch-test-configmap-a,UID:f6db983c-5fad-4815-993c-b954709fa9d8,ResourceVersion:11913989,Generation:0,CreationTimestamp:2020-11-27 21:27:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct 
watchers observe the notification Nov 27 21:27:45.817: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2542,SelfLink:/api/v1/namespaces/watch-2542/configmaps/e2e-watch-test-configmap-a,UID:f6db983c-5fad-4815-993c-b954709fa9d8,ResourceVersion:11914009,Generation:0,CreationTimestamp:2020-11-27 21:27:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Nov 27 21:27:45.818: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2542,SelfLink:/api/v1/namespaces/watch-2542/configmaps/e2e-watch-test-configmap-a,UID:f6db983c-5fad-4815-993c-b954709fa9d8,ResourceVersion:11914009,Generation:0,CreationTimestamp:2020-11-27 21:27:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Nov 27 21:27:55.828: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2542,SelfLink:/api/v1/namespaces/watch-2542/configmaps/e2e-watch-test-configmap-a,UID:f6db983c-5fad-4815-993c-b954709fa9d8,ResourceVersion:11914030,Generation:0,CreationTimestamp:2020-11-27 21:27:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Nov 27 21:27:55.829: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2542,SelfLink:/api/v1/namespaces/watch-2542/configmaps/e2e-watch-test-configmap-a,UID:f6db983c-5fad-4815-993c-b954709fa9d8,ResourceVersion:11914030,Generation:0,CreationTimestamp:2020-11-27 21:27:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Nov 27 21:28:05.837: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2542,SelfLink:/api/v1/namespaces/watch-2542/configmaps/e2e-watch-test-configmap-a,UID:f6db983c-5fad-4815-993c-b954709fa9d8,ResourceVersion:11914050,Generation:0,CreationTimestamp:2020-11-27 21:27:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Nov 27 21:28:05.838: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-2542,SelfLink:/api/v1/namespaces/watch-2542/configmaps/e2e-watch-test-configmap-a,UID:f6db983c-5fad-4815-993c-b954709fa9d8,ResourceVersion:11914050,Generation:0,CreationTimestamp:2020-11-27 21:27:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Nov 27 21:28:15.847: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2542,SelfLink:/api/v1/namespaces/watch-2542/configmaps/e2e-watch-test-configmap-b,UID:6dd40acf-1d8a-4c03-a320-f97200a7c311,ResourceVersion:11914070,Generation:0,CreationTimestamp:2020-11-27 21:28:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Nov 27 21:28:15.848: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2542,SelfLink:/api/v1/namespaces/watch-2542/configmaps/e2e-watch-test-configmap-b,UID:6dd40acf-1d8a-4c03-a320-f97200a7c311,ResourceVersion:11914070,Generation:0,CreationTimestamp:2020-11-27 21:28:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Nov 27 21:28:25.856: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2542,SelfLink:/api/v1/namespaces/watch-2542/configmaps/e2e-watch-test-configmap-b,UID:6dd40acf-1d8a-4c03-a320-f97200a7c311,ResourceVersion:11914090,Generation:0,CreationTimestamp:2020-11-27 21:28:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Nov 27 21:28:25.857: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-2542,SelfLink:/api/v1/namespaces/watch-2542/configmaps/e2e-watch-test-configmap-b,UID:6dd40acf-1d8a-4c03-a320-f97200a7c311,ResourceVersion:11914090,Generation:0,CreationTimestamp:2020-11-27 21:28:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:28:35.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2542" for this suite. 
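The Watchers spec above registers three watches filtered by label (label A, label B, and A-or-B) and asserts each watcher sees only the ADDED/MODIFIED/DELETED events for its own selector. Reconstructed from the objects logged above, the configmaps it mutates look roughly like this sketch:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    # The watches select on this label: one for multiple-watchers-A,
    # one for multiple-watchers-B, and one matching either value.
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "1"   # bumped on each update so MODIFIED events are observable
```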
Nov 27 21:28:41.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:28:42.071: INFO: namespace watch-2542 deletion completed in 6.201043566s • [SLOW TEST:66.388 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:28:42.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-811ea373-879f-4515-9c1c-1346ca92779d Nov 27 21:28:42.251: INFO: Pod name my-hostname-basic-811ea373-879f-4515-9c1c-1346ca92779d: Found 0 pods out of 1 Nov 27 21:28:47.259: INFO: Pod name my-hostname-basic-811ea373-879f-4515-9c1c-1346ca92779d: Found 1 pods out of 1 Nov 27 
21:28:47.260: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-811ea373-879f-4515-9c1c-1346ca92779d" are running Nov 27 21:28:47.265: INFO: Pod "my-hostname-basic-811ea373-879f-4515-9c1c-1346ca92779d-jzm4r" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-27 21:28:42 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-27 21:28:45 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-27 21:28:45 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-27 21:28:42 +0000 UTC Reason: Message:}]) Nov 27 21:28:47.265: INFO: Trying to dial the pod Nov 27 21:28:52.297: INFO: Controller my-hostname-basic-811ea373-879f-4515-9c1c-1346ca92779d: Got expected result from replica 1 [my-hostname-basic-811ea373-879f-4515-9c1c-1346ca92779d-jzm4r]: "my-hostname-basic-811ea373-879f-4515-9c1c-1346ca92779d-jzm4r", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:28:52.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-560" for this suite. 
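The ReplicationController spec above creates one replica that serves its own hostname, then dials it and checks the reply matches the pod name. A minimal sketch of such a controller; the generated UUID suffix is dropped and the image is an assumption, not read from the log:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic       # the test appends a generated UUID suffix
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        # Illustrative: any image that answers HTTP with its own hostname
        # satisfies the check that each replica returns its pod name.
        image: k8s.gcr.io/serve-hostname-example   # assumed image, not from the log
        ports:
        - containerPort: 9376
```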
Nov 27 21:28:58.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:28:58.504: INFO: namespace replication-controller-560 deletion completed in 6.199538255s • [SLOW TEST:16.431 seconds] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:28:58.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Nov 27 21:29:02.704: INFO: Waiting up to 5m0s for pod "client-envvars-c1512e8b-d535-42ef-b503-4a4b391a7b9b" in namespace "pods-8772" to be "success or failure" 
Nov 27 21:29:02.722: INFO: Pod "client-envvars-c1512e8b-d535-42ef-b503-4a4b391a7b9b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.226855ms Nov 27 21:29:04.729: INFO: Pod "client-envvars-c1512e8b-d535-42ef-b503-4a4b391a7b9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024596637s Nov 27 21:29:06.735: INFO: Pod "client-envvars-c1512e8b-d535-42ef-b503-4a4b391a7b9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031282034s STEP: Saw pod success Nov 27 21:29:06.736: INFO: Pod "client-envvars-c1512e8b-d535-42ef-b503-4a4b391a7b9b" satisfied condition "success or failure" Nov 27 21:29:06.740: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-c1512e8b-d535-42ef-b503-4a4b391a7b9b container env3cont: STEP: delete the pod Nov 27 21:29:06.759: INFO: Waiting for pod client-envvars-c1512e8b-d535-42ef-b503-4a4b391a7b9b to disappear Nov 27 21:29:06.765: INFO: Pod client-envvars-c1512e8b-d535-42ef-b503-4a4b391a7b9b no longer exists [AfterEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:29:06.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8772" for this suite. 
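The Pods spec above checks that service environment variables are injected into containers created after the service exists. As an illustration (the service name here is hypothetical, not from this run), a service in the same namespace yields variables derived from its upper-cased name:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: fooservice              # hypothetical name, not from this run
spec:
  selector:
    name: env-demo
  ports:
  - port: 8765
    targetPort: 8080
# Containers started in this namespace after the service exists receive,
# among others (values depend on the assigned cluster IP):
#   FOOSERVICE_SERVICE_HOST=<cluster IP>
#   FOOSERVICE_SERVICE_PORT=8765
```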
Nov 27 21:29:56.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:29:56.990: INFO: namespace pods-8772 deletion completed in 50.216763768s • [SLOW TEST:58.483 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:29:56.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-5d13da06-7b05-42d2-8b26-6a419dd09f8e in namespace container-probe-4465 Nov 27 21:30:01.090: INFO: Started pod 
liveness-5d13da06-7b05-42d2-8b26-6a419dd09f8e in namespace container-probe-4465 STEP: checking the pod's current state and verifying that restartCount is present Nov 27 21:30:01.096: INFO: Initial restart count of pod liveness-5d13da06-7b05-42d2-8b26-6a419dd09f8e is 0 Nov 27 21:30:23.375: INFO: Restart count of pod container-probe-4465/liveness-5d13da06-7b05-42d2-8b26-6a419dd09f8e is now 1 (22.27866107s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:30:23.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4465" for this suite. Nov 27 21:30:29.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:30:29.978: INFO: namespace container-probe-4465 deletion completed in 6.457974016s • [SLOW TEST:32.986 seconds] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 
STEP: Creating a kubernetes client Nov 27 21:30:29.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-e211148d-1f4d-4fca-aaca-2c3589a2807b STEP: Creating a pod to test consume secrets Nov 27 21:30:30.148: INFO: Waiting up to 5m0s for pod "pod-secrets-743b304b-f5f0-4336-85c0-5d8ed1888b9b" in namespace "secrets-6167" to be "success or failure" Nov 27 21:30:30.230: INFO: Pod "pod-secrets-743b304b-f5f0-4336-85c0-5d8ed1888b9b": Phase="Pending", Reason="", readiness=false. Elapsed: 81.895187ms Nov 27 21:30:32.237: INFO: Pod "pod-secrets-743b304b-f5f0-4336-85c0-5d8ed1888b9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088971002s Nov 27 21:30:34.244: INFO: Pod "pod-secrets-743b304b-f5f0-4336-85c0-5d8ed1888b9b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.0959541s STEP: Saw pod success Nov 27 21:30:34.244: INFO: Pod "pod-secrets-743b304b-f5f0-4336-85c0-5d8ed1888b9b" satisfied condition "success or failure" Nov 27 21:30:34.249: INFO: Trying to get logs from node iruya-worker pod pod-secrets-743b304b-f5f0-4336-85c0-5d8ed1888b9b container secret-volume-test: STEP: delete the pod Nov 27 21:30:34.393: INFO: Waiting for pod pod-secrets-743b304b-f5f0-4336-85c0-5d8ed1888b9b to disappear Nov 27 21:30:34.458: INFO: Pod pod-secrets-743b304b-f5f0-4336-85c0-5d8ed1888b9b no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:30:34.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6167" for this suite. Nov 27 21:30:40.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:30:40.693: INFO: namespace secrets-6167 deletion completed in 6.223343712s • [SLOW TEST:10.715 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] 
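What "as non-root with defaultMode and fsGroup set" exercises can be sketched as a pod spec like the following (the secret name, user/group IDs, and mode are illustrative assumptions; the authoritative spec lives in secrets_volume.go):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  securityContext:
    runAsUser: 1000        # the "non-root" part of the test name
    fsGroup: 1001          # group ownership applied to the volume
  containers:
  - name: secret-volume-test
    image: busybox          # assumed image
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test            # assumed name
      defaultMode: 0400                  # file mode the test then verifies
```

Note that `defaultMode` takes an octal literal in YAML but must be written in decimal (256) in JSON.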
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:30:40.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Nov 27 21:30:40.852: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 27 21:30:40.855: INFO: Number of nodes with available pods: 0 Nov 27 21:30:40.856: INFO: Node iruya-worker is running more than one daemon pod Nov 27 21:30:41.961: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 27 21:30:41.966: INFO: Number of nodes with available pods: 0 Nov 27 21:30:41.966: INFO: Node iruya-worker is running more than one daemon pod Nov 27 21:30:42.866: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 27 21:30:42.874: INFO: Number of nodes with available pods: 0 Nov 27 21:30:42.874: INFO: Node iruya-worker is running more than one daemon pod Nov 27 21:30:44.006: INFO: DaemonSet pods can't tolerate node 
iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 27 21:30:44.012: INFO: Number of nodes with available pods: 1 Nov 27 21:30:44.012: INFO: Node iruya-worker is running more than one daemon pod Nov 27 21:30:44.864: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 27 21:30:44.882: INFO: Number of nodes with available pods: 1 Nov 27 21:30:44.882: INFO: Node iruya-worker is running more than one daemon pod Nov 27 21:30:45.868: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 27 21:30:45.873: INFO: Number of nodes with available pods: 2 Nov 27 21:30:45.873: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Nov 27 21:30:45.925: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 27 21:30:45.940: INFO: Number of nodes with available pods: 1 Nov 27 21:30:45.940: INFO: Node iruya-worker2 is running more than one daemon pod Nov 27 21:30:46.954: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 27 21:30:46.961: INFO: Number of nodes with available pods: 1 Nov 27 21:30:46.961: INFO: Node iruya-worker2 is running more than one daemon pod Nov 27 21:30:47.953: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 27 21:30:47.960: INFO: Number of nodes with available pods: 1 Nov 27 21:30:47.960: INFO: Node iruya-worker2 is running more than one daemon pod Nov 27 21:30:48.953: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 27 21:30:48.960: INFO: Number of nodes with available pods: 2 Nov 27 21:30:48.960: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9864, will wait for the garbage collector to delete the pods Nov 27 21:30:49.033: INFO: Deleting DaemonSet.extensions daemon-set took: 8.889867ms Nov 27 21:30:49.334: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.750705ms Nov 27 21:30:55.739: INFO: Number of nodes with available pods: 0 Nov 27 21:30:55.739: INFO: Number of running nodes: 0, number of available pods: 0 Nov 27 21:30:55.742: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9864/daemonsets","resourceVersion":"11914580"},"items":null} Nov 27 21:30:55.745: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9864/pods","resourceVersion":"11914580"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:30:55.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9864" for this suite. 
Nov 27 21:31:01.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:31:01.944: INFO: namespace daemonsets-9864 deletion completed in 6.17390363s • [SLOW TEST:21.250 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:31:01.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4346.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4346.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
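The repeated "DaemonSet pods can't tolerate node iruya-control-plane" lines above come from the test DaemonSet carrying no toleration for the master taint, so only the two worker nodes are counted. A DaemonSet that should also run on such a node would add a toleration — a sketch using apps/v1 field names, with an assumed placeholder image:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels: {app: daemon-set}
  template:
    metadata:
      labels: {app: daemon-set}
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master   # matches the NoSchedule taint logged above
        effect: NoSchedule
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1           # assumed placeholder image
```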
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4346.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4346.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4346.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4346.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 27 21:31:08.152: INFO: DNS probes using dns-4346/dns-test-4ddaff51-8f41-4eb0-8a43-c344356b86a6 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:31:08.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4346" for this suite. 
Nov 27 21:31:14.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:31:14.481: INFO: namespace dns-4346 deletion completed in 6.278566078s • [SLOW TEST:12.535 seconds] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:31:14.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Nov 27 21:31:14.570: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d995bd82-65ac-434c-b27d-a245b5787d66" in 
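The probe scripts above derive each pod's DNS A record by dash-mangling its IP address under the namespace's pod subdomain. That transformation is plain text processing and can be reproduced locally (the IP here is an example, not taken from the log; the namespace dns-4346 is):

```shell
# Reproduce the podARec name-mangling from the probe script:
# a pod's A record is its IP with dots replaced by dashes,
# suffixed with "<namespace>.pod.cluster.local".
ip="10.244.1.7"   # example pod IP, an assumption
podARec="$(echo "$ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-4346.pod.cluster.local"}')"
echo "$podARec"   # 10-244-1-7.dns-4346.pod.cluster.local
```

The probe then resolves that name over both UDP (`dig +notcp`) and TCP (`dig +tcp`) and writes an OK marker file for each transport that succeeds.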
namespace "projected-8129" to be "success or failure" Nov 27 21:31:14.598: INFO: Pod "downwardapi-volume-d995bd82-65ac-434c-b27d-a245b5787d66": Phase="Pending", Reason="", readiness=false. Elapsed: 27.734968ms Nov 27 21:31:16.605: INFO: Pod "downwardapi-volume-d995bd82-65ac-434c-b27d-a245b5787d66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034732727s Nov 27 21:31:18.614: INFO: Pod "downwardapi-volume-d995bd82-65ac-434c-b27d-a245b5787d66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043163705s STEP: Saw pod success Nov 27 21:31:18.614: INFO: Pod "downwardapi-volume-d995bd82-65ac-434c-b27d-a245b5787d66" satisfied condition "success or failure" Nov 27 21:31:18.658: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-d995bd82-65ac-434c-b27d-a245b5787d66 container client-container: STEP: delete the pod Nov 27 21:31:18.698: INFO: Waiting for pod downwardapi-volume-d995bd82-65ac-434c-b27d-a245b5787d66 to disappear Nov 27 21:31:18.705: INFO: Pod downwardapi-volume-d995bd82-65ac-434c-b27d-a245b5787d66 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:31:18.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8129" for this suite. 
Nov 27 21:31:24.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:31:24.891: INFO: namespace projected-8129 deletion completed in 6.176725345s • [SLOW TEST:10.409 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:31:24.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Nov 27 21:31:25.042: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods 
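The "should set DefaultMode on files" test mounts pod metadata through a downward API volume and checks the resulting file permissions. A sketch of such a pod, with assumed image, mode, and paths:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
  - name: client-container
    image: busybox            # assumed; the e2e suite uses its own test image
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400       # the mode the test verifies on the projected files
      items:
      - path: "podname"
        fieldRef:
          fieldPath: metadata.name
```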
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:31:29.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2716" for this suite. Nov 27 21:32:17.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:32:17.328: INFO: namespace pods-2716 deletion completed in 48.18652182s • [SLOW TEST:52.435 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:32:17.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name 
secret-test-b9a4cc2a-56d8-4e4a-a595-9392a8fe978e STEP: Creating a pod to test consume secrets Nov 27 21:32:17.440: INFO: Waiting up to 5m0s for pod "pod-secrets-d8491faa-2f66-4839-bafc-e82fde76ca26" in namespace "secrets-9651" to be "success or failure" Nov 27 21:32:17.448: INFO: Pod "pod-secrets-d8491faa-2f66-4839-bafc-e82fde76ca26": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044231ms Nov 27 21:32:19.454: INFO: Pod "pod-secrets-d8491faa-2f66-4839-bafc-e82fde76ca26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013747756s Nov 27 21:32:21.461: INFO: Pod "pod-secrets-d8491faa-2f66-4839-bafc-e82fde76ca26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020666383s STEP: Saw pod success Nov 27 21:32:21.461: INFO: Pod "pod-secrets-d8491faa-2f66-4839-bafc-e82fde76ca26" satisfied condition "success or failure" Nov 27 21:32:21.466: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-d8491faa-2f66-4839-bafc-e82fde76ca26 container secret-volume-test: STEP: delete the pod Nov 27 21:32:21.504: INFO: Waiting for pod pod-secrets-d8491faa-2f66-4839-bafc-e82fde76ca26 to disappear Nov 27 21:32:21.520: INFO: Pod pod-secrets-d8491faa-2f66-4839-bafc-e82fde76ca26 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:32:21.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9651" for this suite. 
Nov 27 21:32:27.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:32:27.734: INFO: namespace secrets-9651 deletion completed in 6.204621228s • [SLOW TEST:10.403 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:32:27.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Nov 27 21:32:27.809: INFO: Waiting up to 5m0s for pod "pod-3370f58b-dece-49bd-8a95-e1155d1e0c3c" in namespace "emptydir-8366" to be "success or failure" Nov 27 21:32:27.818: INFO: Pod "pod-3370f58b-dece-49bd-8a95-e1155d1e0c3c": Phase="Pending", Reason="", 
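"Consumable in multiple volumes in a pod" means the same Secret is exposed through two separate volumes at two mount paths. A sketch, with assumed names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-multi
spec:
  containers:
  - name: secret-volume-test
    image: busybox                      # assumed image
    command: ["sh", "-c", "cat /etc/secret-1/data-1 /etc/secret-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-1
    - name: secret-volume-2
      mountPath: /etc/secret-2
  volumes:
  - name: secret-volume-1
    secret: {secretName: my-secret}     # same secret, mounted twice
  - name: secret-volume-2
    secret: {secretName: my-secret}
```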
readiness=false. Elapsed: 8.981274ms Nov 27 21:32:29.829: INFO: Pod "pod-3370f58b-dece-49bd-8a95-e1155d1e0c3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019788204s Nov 27 21:32:31.836: INFO: Pod "pod-3370f58b-dece-49bd-8a95-e1155d1e0c3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02677717s STEP: Saw pod success Nov 27 21:32:31.836: INFO: Pod "pod-3370f58b-dece-49bd-8a95-e1155d1e0c3c" satisfied condition "success or failure" Nov 27 21:32:31.840: INFO: Trying to get logs from node iruya-worker pod pod-3370f58b-dece-49bd-8a95-e1155d1e0c3c container test-container: STEP: delete the pod Nov 27 21:32:31.871: INFO: Waiting for pod pod-3370f58b-dece-49bd-8a95-e1155d1e0c3c to disappear Nov 27 21:32:31.890: INFO: Pod pod-3370f58b-dece-49bd-8a95-e1155d1e0c3c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:32:31.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8366" for this suite. 
Nov 27 21:32:37.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:32:38.068: INFO: namespace emptydir-8366 deletion completed in 6.168827542s • [SLOW TEST:10.332 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:32:38.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Nov 27 21:32:38.148: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods 
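The emptyDir "(non-root,0644,default)" case runs as a non-root user, writes a file with mode 0644, and uses the default medium (node disk rather than tmpfs). A sketch under those assumptions — the real test drives this through its mounttest image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example
spec:
  securityContext:
    runAsUser: 1000       # the "non-root" part of the test name
  containers:
  - name: test-container
    image: busybox        # assumed stand-in for the e2e mounttest image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}          # "default" medium; medium: Memory would select tmpfs
```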
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:32:42.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-599" for this suite. Nov 27 21:33:28.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:33:28.566: INFO: namespace pods-599 deletion completed in 46.192227085s • [SLOW TEST:50.497 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:33:28.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-2b1dbd51-e12f-457b-b669-591e20370f0f STEP: Creating a pod to test consume configMaps Nov 27 21:33:28.658: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9d21125c-c014-4325-bf55-8669bfe97104" in namespace "projected-705" to be "success or failure" Nov 27 21:33:28.670: INFO: Pod "pod-projected-configmaps-9d21125c-c014-4325-bf55-8669bfe97104": Phase="Pending", Reason="", readiness=false. Elapsed: 12.073908ms Nov 27 21:33:30.728: INFO: Pod "pod-projected-configmaps-9d21125c-c014-4325-bf55-8669bfe97104": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069838123s Nov 27 21:33:32.735: INFO: Pod "pod-projected-configmaps-9d21125c-c014-4325-bf55-8669bfe97104": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076810687s STEP: Saw pod success Nov 27 21:33:32.735: INFO: Pod "pod-projected-configmaps-9d21125c-c014-4325-bf55-8669bfe97104" satisfied condition "success or failure" Nov 27 21:33:32.739: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-9d21125c-c014-4325-bf55-8669bfe97104 container projected-configmap-volume-test: STEP: delete the pod Nov 27 21:33:32.768: INFO: Waiting for pod pod-projected-configmaps-9d21125c-c014-4325-bf55-8669bfe97104 to disappear Nov 27 21:33:32.800: INFO: Pod pod-projected-configmaps-9d21125c-c014-4325-bf55-8669bfe97104 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:33:32.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-705" for this suite. 
Nov 27 21:33:38.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:33:39.042: INFO: namespace projected-705 deletion completed in 6.231345937s • [SLOW TEST:10.474 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:33:39.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Nov 27 21:33:39.108: INFO: namespace kubectl-1768 Nov 27 21:33:39.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - 
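"With mappings and Item mode set" refers to remapping ConfigMap keys to custom paths and giving each item its own file mode. The volumes stanza of such a pod might look like this (key, path, and mode are assumptions; the ConfigMap name pattern is from the log, with the random suffix elided):

```yaml
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:                      # the "mappings" in the test name
          - key: data-1               # assumed key
            path: path/to/data-1      # remapped file path inside the mount
            mode: 0400                # per-item mode the test verifies
```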
--namespace=kubectl-1768' Nov 27 21:33:43.411: INFO: stderr: "" Nov 27 21:33:43.411: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Nov 27 21:33:44.419: INFO: Selector matched 1 pods for map[app:redis] Nov 27 21:33:44.420: INFO: Found 0 / 1 Nov 27 21:33:45.420: INFO: Selector matched 1 pods for map[app:redis] Nov 27 21:33:45.420: INFO: Found 0 / 1 Nov 27 21:33:46.419: INFO: Selector matched 1 pods for map[app:redis] Nov 27 21:33:46.419: INFO: Found 0 / 1 Nov 27 21:33:47.418: INFO: Selector matched 1 pods for map[app:redis] Nov 27 21:33:47.418: INFO: Found 1 / 1 Nov 27 21:33:47.418: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Nov 27 21:33:47.424: INFO: Selector matched 1 pods for map[app:redis] Nov 27 21:33:47.424: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Nov 27 21:33:47.424: INFO: wait on redis-master startup in kubectl-1768 Nov 27 21:33:47.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-l74qb redis-master --namespace=kubectl-1768' Nov 27 21:33:48.701: INFO: stderr: "" Nov 27 21:33:48.701: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 27 Nov 21:33:46.228 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 27 Nov 21:33:46.228 # Server started, Redis version 3.2.12\n1:M 27 Nov 21:33:46.229 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. 
This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 27 Nov 21:33:46.229 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Nov 27 21:33:48.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1768' Nov 27 21:33:50.079: INFO: stderr: "" Nov 27 21:33:50.079: INFO: stdout: "service/rm2 exposed\n" Nov 27 21:33:50.085: INFO: Service rm2 in namespace kubectl-1768 found. STEP: exposing service Nov 27 21:33:52.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1768' Nov 27 21:33:53.456: INFO: stderr: "" Nov 27 21:33:53.457: INFO: stdout: "service/rm3 exposed\n" Nov 27 21:33:53.482: INFO: Service rm3 in namespace kubectl-1768 found. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:33:55.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1768" for this suite. 
Nov 27 21:34:19.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:34:19.709: INFO: namespace kubectl-1768 deletion completed in 24.210005426s • [SLOW TEST:40.667 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:34:19.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-6760124e-00af-4400-9645-a71a72010a5e [AfterEach] [sig-api-machinery] Secrets 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:34:19.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3178" for this suite. Nov 27 21:34:25.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:34:26.018: INFO: namespace secrets-3178 deletion completed in 6.196253868s • [SLOW TEST:6.306 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:34:26.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Nov 
27 21:34:26.160: INFO: Waiting up to 5m0s for pod "pod-18712bab-30ca-429a-9420-c05ba0363c20" in namespace "emptydir-7829" to be "success or failure" Nov 27 21:34:26.176: INFO: Pod "pod-18712bab-30ca-429a-9420-c05ba0363c20": Phase="Pending", Reason="", readiness=false. Elapsed: 15.75751ms Nov 27 21:34:28.183: INFO: Pod "pod-18712bab-30ca-429a-9420-c05ba0363c20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022692734s Nov 27 21:34:30.190: INFO: Pod "pod-18712bab-30ca-429a-9420-c05ba0363c20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029935171s STEP: Saw pod success Nov 27 21:34:30.190: INFO: Pod "pod-18712bab-30ca-429a-9420-c05ba0363c20" satisfied condition "success or failure" Nov 27 21:34:30.194: INFO: Trying to get logs from node iruya-worker pod pod-18712bab-30ca-429a-9420-c05ba0363c20 container test-container: STEP: delete the pod Nov 27 21:34:30.248: INFO: Waiting for pod pod-18712bab-30ca-429a-9420-c05ba0363c20 to disappear Nov 27 21:34:30.259: INFO: Pod pod-18712bab-30ca-429a-9420-c05ba0363c20 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:34:30.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7829" for this suite. 
Nov 27 21:34:36.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:34:36.500: INFO: namespace emptydir-7829 deletion completed in 6.234202578s • [SLOW TEST:10.480 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:34:36.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Nov 27 21:34:44.643: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Nov 27 21:34:44.656: INFO: Pod pod-with-prestop-exec-hook still exists Nov 27 21:34:46.657: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Nov 27 21:34:46.663: INFO: Pod pod-with-prestop-exec-hook still exists Nov 27 21:34:48.657: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Nov 27 21:34:48.663: INFO: Pod pod-with-prestop-exec-hook still exists Nov 27 21:34:50.657: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Nov 27 21:34:50.664: INFO: Pod pod-with-prestop-exec-hook still exists Nov 27 21:34:52.657: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Nov 27 21:34:52.664: INFO: Pod pod-with-prestop-exec-hook still exists Nov 27 21:34:54.657: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Nov 27 21:34:54.664: INFO: Pod pod-with-prestop-exec-hook still exists Nov 27 21:34:56.657: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Nov 27 21:34:56.664: INFO: Pod pod-with-prestop-exec-hook still exists Nov 27 21:34:58.657: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Nov 27 21:34:58.663: INFO: Pod pod-with-prestop-exec-hook still exists Nov 27 21:35:00.657: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Nov 27 21:35:00.664: INFO: Pod pod-with-prestop-exec-hook still exists Nov 27 21:35:02.657: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Nov 27 21:35:02.664: INFO: Pod pod-with-prestop-exec-hook still exists Nov 27 21:35:04.657: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Nov 27 21:35:04.664: INFO: Pod pod-with-prestop-exec-hook 
still exists Nov 27 21:35:06.657: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Nov 27 21:35:06.663: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:35:06.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-162" for this suite. Nov 27 21:35:28.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:35:28.882: INFO: namespace container-lifecycle-hook-162 deletion completed in 22.199964336s • [SLOW TEST:52.380 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:35:28.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:35:33.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2748" for this suite. 
Nov 27 21:36:19.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:36:19.192: INFO: namespace kubelet-test-2748 deletion completed in 46.165614289s • [SLOW TEST:50.304 seconds] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:36:19.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-d1dea185-49e8-4fcc-9e5b-9aac902a365f STEP: Creating a pod to test consume secrets Nov 27 21:36:19.319: 
INFO: Waiting up to 5m0s for pod "pod-projected-secrets-28d0b19b-2f51-437f-8d1f-4db4dddaf683" in namespace "projected-3945" to be "success or failure" Nov 27 21:36:19.336: INFO: Pod "pod-projected-secrets-28d0b19b-2f51-437f-8d1f-4db4dddaf683": Phase="Pending", Reason="", readiness=false. Elapsed: 16.935416ms Nov 27 21:36:21.343: INFO: Pod "pod-projected-secrets-28d0b19b-2f51-437f-8d1f-4db4dddaf683": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023948086s Nov 27 21:36:23.350: INFO: Pod "pod-projected-secrets-28d0b19b-2f51-437f-8d1f-4db4dddaf683": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030979177s STEP: Saw pod success Nov 27 21:36:23.350: INFO: Pod "pod-projected-secrets-28d0b19b-2f51-437f-8d1f-4db4dddaf683" satisfied condition "success or failure" Nov 27 21:36:23.355: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-28d0b19b-2f51-437f-8d1f-4db4dddaf683 container secret-volume-test: STEP: delete the pod Nov 27 21:36:23.397: INFO: Waiting for pod pod-projected-secrets-28d0b19b-2f51-437f-8d1f-4db4dddaf683 to disappear Nov 27 21:36:23.405: INFO: Pod pod-projected-secrets-28d0b19b-2f51-437f-8d1f-4db4dddaf683 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:36:23.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3945" for this suite. 
Nov 27 21:36:29.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:36:29.603: INFO: namespace projected-3945 deletion completed in 6.190459403s • [SLOW TEST:10.410 seconds] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:36:29.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-77b7a016-34d7-4ea3-af1e-0dbbc843bdc1 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:36:35.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8904" for this suite. Nov 27 21:36:49.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:36:49.958: INFO: namespace configmap-8904 deletion completed in 14.170149s • [SLOW TEST:20.354 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:36:49.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 
STEP: Creating a pod to test downward api env vars Nov 27 21:36:50.126: INFO: Waiting up to 5m0s for pod "downward-api-ce9e83f5-87aa-48fa-bd36-e462d993dd8a" in namespace "downward-api-1694" to be "success or failure" Nov 27 21:36:50.159: INFO: Pod "downward-api-ce9e83f5-87aa-48fa-bd36-e462d993dd8a": Phase="Pending", Reason="", readiness=false. Elapsed: 32.969435ms Nov 27 21:36:52.167: INFO: Pod "downward-api-ce9e83f5-87aa-48fa-bd36-e462d993dd8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040440333s Nov 27 21:36:54.173: INFO: Pod "downward-api-ce9e83f5-87aa-48fa-bd36-e462d993dd8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046328002s STEP: Saw pod success Nov 27 21:36:54.173: INFO: Pod "downward-api-ce9e83f5-87aa-48fa-bd36-e462d993dd8a" satisfied condition "success or failure" Nov 27 21:36:54.177: INFO: Trying to get logs from node iruya-worker pod downward-api-ce9e83f5-87aa-48fa-bd36-e462d993dd8a container dapi-container: STEP: delete the pod Nov 27 21:36:54.298: INFO: Waiting for pod downward-api-ce9e83f5-87aa-48fa-bd36-e462d993dd8a to disappear Nov 27 21:36:54.303: INFO: Pod downward-api-ce9e83f5-87aa-48fa-bd36-e462d993dd8a no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:36:54.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1694" for this suite. 
Nov 27 21:37:00.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:37:00.538: INFO: namespace downward-api-1694 deletion completed in 6.227399144s • [SLOW TEST:10.576 seconds] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:37:00.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Nov 27 21:37:00.639: INFO: Waiting up to 5m0s for pod "downward-api-f3a6adb7-119f-45eb-913d-aefdcb360fa5" in namespace "downward-api-1410" to be "success or failure" Nov 27 21:37:00.645: INFO: Pod "downward-api-f3a6adb7-119f-45eb-913d-aefdcb360fa5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.339946ms Nov 27 21:37:02.652: INFO: Pod "downward-api-f3a6adb7-119f-45eb-913d-aefdcb360fa5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012746s Nov 27 21:37:04.658: INFO: Pod "downward-api-f3a6adb7-119f-45eb-913d-aefdcb360fa5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019101811s STEP: Saw pod success Nov 27 21:37:04.658: INFO: Pod "downward-api-f3a6adb7-119f-45eb-913d-aefdcb360fa5" satisfied condition "success or failure" Nov 27 21:37:04.663: INFO: Trying to get logs from node iruya-worker2 pod downward-api-f3a6adb7-119f-45eb-913d-aefdcb360fa5 container dapi-container: STEP: delete the pod Nov 27 21:37:04.695: INFO: Waiting for pod downward-api-f3a6adb7-119f-45eb-913d-aefdcb360fa5 to disappear Nov 27 21:37:04.836: INFO: Pod downward-api-f3a6adb7-119f-45eb-913d-aefdcb360fa5 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:37:04.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1410" for this suite. 
Nov 27 21:37:10.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:37:11.189: INFO: namespace downward-api-1410 deletion completed in 6.342141139s • [SLOW TEST:10.649 seconds] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:37:11.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-5462 [It] Should recreate evicted statefulset [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-5462 STEP: Creating statefulset with conflicting port in namespace statefulset-5462 STEP: Waiting until pod test-pod will start running in namespace statefulset-5462 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-5462 Nov 27 21:37:17.386: INFO: Observed stateful pod in namespace: statefulset-5462, name: ss-0, uid: 80548a66-27b2-4380-baf6-c83a0a6d7b0a, status phase: Pending. Waiting for statefulset controller to delete. Nov 27 21:37:17.760: INFO: Observed stateful pod in namespace: statefulset-5462, name: ss-0, uid: 80548a66-27b2-4380-baf6-c83a0a6d7b0a, status phase: Failed. Waiting for statefulset controller to delete. Nov 27 21:37:17.766: INFO: Observed stateful pod in namespace: statefulset-5462, name: ss-0, uid: 80548a66-27b2-4380-baf6-c83a0a6d7b0a, status phase: Failed. Waiting for statefulset controller to delete. 
Nov 27 21:37:17.784: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5462 STEP: Removing pod with conflicting port in namespace statefulset-5462 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-5462 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Nov 27 21:37:21.868: INFO: Deleting all statefulset in ns statefulset-5462 Nov 27 21:37:21.873: INFO: Scaling statefulset ss to 0 Nov 27 21:37:41.898: INFO: Waiting for statefulset status.replicas updated to 0 Nov 27 21:37:41.904: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:37:41.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5462" for this suite. 
Nov 27 21:37:47.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:37:48.099: INFO: namespace statefulset-5462 deletion completed in 6.167498367s • [SLOW TEST:36.907 seconds] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:37:48.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-6224 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6224 to expose endpoints map[] Nov 27 21:37:48.250: INFO: successfully validated that service multi-endpoint-test in namespace services-6224 exposes endpoints map[] (36.439711ms elapsed) STEP: Creating pod pod1 in namespace services-6224 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6224 to expose endpoints map[pod1:[100]] Nov 27 21:37:51.343: INFO: successfully validated that service multi-endpoint-test in namespace services-6224 exposes endpoints map[pod1:[100]] (3.081118598s elapsed) STEP: Creating pod pod2 in namespace services-6224 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6224 to expose endpoints map[pod1:[100] pod2:[101]] Nov 27 21:37:55.600: INFO: successfully validated that service multi-endpoint-test in namespace services-6224 exposes endpoints map[pod1:[100] pod2:[101]] (4.250004339s elapsed) STEP: Deleting pod pod1 in namespace services-6224 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6224 to expose endpoints map[pod2:[101]] Nov 27 21:37:55.640: INFO: successfully validated that service multi-endpoint-test in namespace services-6224 exposes endpoints map[pod2:[101]] (33.951384ms elapsed) STEP: Deleting pod pod2 in namespace services-6224 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6224 to expose endpoints map[] Nov 27 21:37:55.651: INFO: successfully validated that service multi-endpoint-test in namespace services-6224 exposes endpoints map[] (6.182276ms elapsed) [AfterEach] [sig-network] Services 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:37:55.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6224" for this suite. Nov 27 21:38:17.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:38:17.897: INFO: namespace services-6224 deletion completed in 22.205137108s [AfterEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:29.791 seconds] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:38:17.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Nov 27 21:38:17.973: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. Nov 27 21:38:20.119: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Nov 27 21:38:22.336: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63742109900, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63742109900, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63742109900, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63742109900, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 27 21:38:24.342: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63742109900, loc:(*time.Location)(0x792fa60)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63742109900, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63742109900, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63742109900, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 27 21:38:27.000: INFO: Waited 635.094806ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:38:27.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-760" for this suite. 
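The aggregator registration above blocks until the sample-apiserver Deployment reports availability: the dumped `v1.DeploymentStatus` goes from `Available=False` / `ReadyReplicas:0` to ready before the suite proceeds. The availability check can be sketched over a status shaped like that dump (plain dicts here, not the real client types):

```python
def deployment_available(status, min_available=1):
    """True once the Deployment has enough ready replicas and an
    Available=True condition, as in the dumped DeploymentStatus."""
    if status.get("readyReplicas", 0) < min_available:
        return False
    return any(c.get("type") == "Available" and c.get("status") == "True"
               for c in status.get("conditions", []))
```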
Nov 27 21:38:33.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:38:34.038: INFO: namespace aggregator-760 deletion completed in 6.486186501s • [SLOW TEST:16.138 seconds] [sig-api-machinery] Aggregator /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:38:34.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Nov 27 21:38:34.220: INFO: Waiting up to 5m0s for pod "pod-6ba4201c-e0a1-4798-ba26-1f5cd4f1952d" in namespace "emptydir-265" to be "success or failure" Nov 27 21:38:34.254: INFO: Pod "pod-6ba4201c-e0a1-4798-ba26-1f5cd4f1952d": Phase="Pending", Reason="", 
readiness=false. Elapsed: 33.214705ms Nov 27 21:38:36.291: INFO: Pod "pod-6ba4201c-e0a1-4798-ba26-1f5cd4f1952d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069904661s Nov 27 21:38:38.297: INFO: Pod "pod-6ba4201c-e0a1-4798-ba26-1f5cd4f1952d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076218586s STEP: Saw pod success Nov 27 21:38:38.297: INFO: Pod "pod-6ba4201c-e0a1-4798-ba26-1f5cd4f1952d" satisfied condition "success or failure" Nov 27 21:38:38.302: INFO: Trying to get logs from node iruya-worker pod pod-6ba4201c-e0a1-4798-ba26-1f5cd4f1952d container test-container: STEP: delete the pod Nov 27 21:38:38.459: INFO: Waiting for pod pod-6ba4201c-e0a1-4798-ba26-1f5cd4f1952d to disappear Nov 27 21:38:38.510: INFO: Pod pod-6ba4201c-e0a1-4798-ba26-1f5cd4f1952d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:38:38.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-265" for this suite. 
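The (root,0666,tmpfs) case above has the container create a file with mode 0666 on the emptyDir mount and verifies the bits survive. The same check can be sketched locally (an ordinary temp directory stands in for the tmpfs-backed emptyDir; the umask must be cleared, or a typical 022 umask would strip 0o666 down to 0o644):

```python
import os
import stat
import tempfile

def create_with_mode(path, mode=0o666):
    """Create `path` with an explicit mode and return the bits that stuck."""
    old_umask = os.umask(0)  # clear umask so the requested bits survive
    try:
        fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_EXCL, mode)
        os.close(fd)
    finally:
        os.umask(old_umask)
    return stat.S_IMODE(os.stat(path).st_mode)
```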
Nov 27 21:38:44.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:38:44.753: INFO: namespace emptydir-265 deletion completed in 6.231313677s • [SLOW TEST:10.711 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:38:44.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Nov 27 21:38:48.909: INFO: Pod pod-hostip-d64e02fa-5c24-467f-b4db-ca9bebdf0d6b has hostIP: 172.18.0.6 [AfterEach] [k8s.io] Pods 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:38:48.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1190" for this suite. Nov 27 21:39:10.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:39:11.140: INFO: namespace pods-1190 deletion completed in 22.22400586s • [SLOW TEST:26.382 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:39:11.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Nov 27 21:39:11.239: INFO: Waiting up to 5m0s for pod "downwardapi-volume-abd39067-ca8f-4544-80d8-806284d6a388" in namespace "downward-api-8268" to be "success or failure" Nov 27 21:39:11.246: INFO: Pod "downwardapi-volume-abd39067-ca8f-4544-80d8-806284d6a388": Phase="Pending", Reason="", readiness=false. Elapsed: 7.497414ms Nov 27 21:39:13.304: INFO: Pod "downwardapi-volume-abd39067-ca8f-4544-80d8-806284d6a388": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065241654s Nov 27 21:39:15.311: INFO: Pod "downwardapi-volume-abd39067-ca8f-4544-80d8-806284d6a388": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071951718s STEP: Saw pod success Nov 27 21:39:15.311: INFO: Pod "downwardapi-volume-abd39067-ca8f-4544-80d8-806284d6a388" satisfied condition "success or failure" Nov 27 21:39:15.316: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-abd39067-ca8f-4544-80d8-806284d6a388 container client-container: STEP: delete the pod Nov 27 21:39:15.375: INFO: Waiting for pod downwardapi-volume-abd39067-ca8f-4544-80d8-806284d6a388 to disappear Nov 27 21:39:15.390: INFO: Pod downwardapi-volume-abd39067-ca8f-4544-80d8-806284d6a388 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:39:15.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8268" for this suite. 
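In the downward API case above, the pod's cpu request is projected into a volume file via `resourceFieldRef` with a divisor, and the exposed value is the request divided by the divisor, rounded up to a whole number per the downward API docs. Assuming the request is held in milliCPU, the conversion can be sketched as (divisor table and function name are illustrative, not the test's own code):

```python
import math

# Divisors expressed in milliCPU: "1m" -> 1 milli-core, "1" -> one core.
DIVISORS_MILLI = {"1m": 1, "1": 1000}

def downward_cpu(request_milli, divisor="1m"):
    """Value the downward API exposes for a cpu request: request / divisor,
    rounded up to an integer (ceiling rounding per the resourceFieldRef docs)."""
    return math.ceil(request_milli / DIVISORS_MILLI[divisor])
```

So a 250m request reads as 250 with divisor "1m" and as 1 with divisor "1".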
Nov 27 21:39:21.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:39:21.605: INFO: namespace downward-api-8268 deletion completed in 6.204862453s • [SLOW TEST:10.464 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:39:21.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Nov 27 21:39:21.675: INFO: Creating deployment "nginx-deployment" Nov 27 21:39:21.682: INFO: Waiting for observed generation 1 Nov 27 21:39:23.825: INFO: Waiting for all required pods to come up Nov 27 
21:39:23.868: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Nov 27 21:39:33.884: INFO: Waiting for deployment "nginx-deployment" to complete Nov 27 21:39:33.894: INFO: Updating deployment "nginx-deployment" with a non-existent image Nov 27 21:39:33.904: INFO: Updating deployment nginx-deployment Nov 27 21:39:33.905: INFO: Waiting for observed generation 2 Nov 27 21:39:35.920: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Nov 27 21:39:35.925: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Nov 27 21:39:35.929: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Nov 27 21:39:35.944: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Nov 27 21:39:35.945: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Nov 27 21:39:35.948: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Nov 27 21:39:35.956: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Nov 27 21:39:35.957: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Nov 27 21:39:35.965: INFO: Updating deployment nginx-deployment Nov 27 21:39:35.965: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Nov 27 21:39:36.330: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Nov 27 21:39:38.627: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Nov 27 21:39:39.046: INFO: Deployment "nginx-deployment": 
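The proportional-scaling step verified above (scale 10 → 30 with maxSurge 3 while two ReplicaSets sit at 8 and 5 replicas, ending at 20 and 13) distributes the new replicas across the ReplicaSets in proportion to their current sizes. A dependency-free sketch of that arithmetic (function name and largest-remainder rounding are assumptions; the deployment controller's actual code differs):

```python
def proportional_scale(replica_sets, new_total, max_surge):
    """Split a Deployment scale-up across its ReplicaSets in proportion
    to their current sizes. replica_sets: {name: current_replicas}."""
    allowed = new_total + max_surge            # surge headroom applies to the total
    current = sum(replica_sets.values())
    to_add = allowed - current
    shares = {n: r * to_add / current for n, r in replica_sets.items()}
    result = {n: replica_sets[n] + int(shares[n]) for n in replica_sets}
    leftover = to_add - sum(int(s) for s in shares.values())
    # hand leftover replicas to the sets with the largest fractional share
    for n in sorted(shares, key=lambda n: shares[n] - int(shares[n]),
                    reverse=True)[:leftover]:
        result[n] += 1
    return result
```

With the log's numbers: total allowed is 30 + 3 = 33, current is 8 + 5 = 13, so 20 replicas are added as 8/13 and 5/13 shares, landing on 20 and 13.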
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-2341,SelfLink:/apis/apps/v1/namespaces/deployment-2341/deployments/nginx-deployment,UID:be420127-7474-4fa6-b658-033a0888d094,ResourceVersion:11916668,Generation:3,CreationTimestamp:2020-11-27 21:39:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-11-27 21:39:36 +0000 UTC 2020-11-27 21:39:36 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-11-27 21:39:36 +0000 UTC 2020-11-27 21:39:21 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} Nov 27 21:39:39.453: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-2341,SelfLink:/apis/apps/v1/namespaces/deployment-2341/replicasets/nginx-deployment-55fb7cb77f,UID:102f7f24-f72f-4bf0-9de3-9b12d5745e91,ResourceVersion:11916653,Generation:3,CreationTimestamp:2020-11-27 21:39:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment be420127-7474-4fa6-b658-033a0888d094 0x4000057d57 0x4000057d58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Nov 27 21:39:39.453: INFO: All old ReplicaSets of Deployment "nginx-deployment": Nov 27 21:39:39.454: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-2341,SelfLink:/apis/apps/v1/namespaces/deployment-2341/replicasets/nginx-deployment-7b8c6f4498,UID:5a5b504e-33e8-48e1-8803-21cf7a6ee879,ResourceVersion:11916664,Generation:3,CreationTimestamp:2020-11-27 21:39:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment be420127-7474-4fa6-b658-033a0888d094 0x4000057f97 0x4000057f98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Nov 27 21:39:39.517: INFO: Pod "nginx-deployment-55fb7cb77f-5v4k9" is not available: 
Pending on node iruya-worker (HostIP 172.18.0.6), created 2020-11-27 21:39:33 +0000 UTC; container nginx waiting: ContainerCreating (image nginx:404)
Nov 27 21:39:39.518: INFO: Pod "nginx-deployment-55fb7cb77f-78f5h" is not available: Pending on node iruya-worker2 (HostIP 172.18.0.5), created 2020-11-27 21:39:36 +0000 UTC; container nginx waiting: ContainerCreating (image nginx:404)
Nov 27 21:39:39.519: INFO: Pod "nginx-deployment-55fb7cb77f-7brdg" is not available: Pending on node iruya-worker (HostIP 172.18.0.6), created 2020-11-27 21:39:36 +0000 UTC; container nginx waiting: ContainerCreating (image nginx:404)
Nov 27 21:39:39.520: INFO: Pod "nginx-deployment-55fb7cb77f-b4w8w" is not available: Pending on node iruya-worker2 (HostIP 172.18.0.5), created 2020-11-27 21:39:36 +0000 UTC; container nginx waiting: ContainerCreating (image nginx:404)
Nov 27 21:39:39.522: INFO: Pod "nginx-deployment-55fb7cb77f-fgbr4" is not available: Pending on node iruya-worker2 (HostIP 172.18.0.5), created 2020-11-27 21:39:34 +0000 UTC; container nginx waiting: ContainerCreating (image nginx:404)
Nov 27 21:39:39.523: INFO: Pod "nginx-deployment-55fb7cb77f-jkz9t" is not available: Pending on node iruya-worker2 (HostIP 172.18.0.5), created 2020-11-27 21:39:36 +0000 UTC; container nginx waiting: ContainerCreating (image nginx:404)
Nov 27 21:39:39.524: INFO: Pod "nginx-deployment-55fb7cb77f-knxqt" is not available: Pending on node iruya-worker (HostIP 172.18.0.6), created 2020-11-27 21:39:36 +0000 UTC; container nginx waiting: ContainerCreating (image nginx:404)
Nov 27 21:39:39.525: INFO: Pod "nginx-deployment-55fb7cb77f-s56hl" is not available: Pending on node iruya-worker2 (HostIP 172.18.0.5), created 2020-11-27 21:39:33 +0000 UTC; container nginx waiting: ContainerCreating (image nginx:404)
Nov 27 21:39:39.526: INFO: Pod "nginx-deployment-55fb7cb77f-s6hd4" is not available: Pending on node iruya-worker (HostIP 172.18.0.6, PodIP 10.244.1.13), created 2020-11-27 21:39:33 +0000 UTC; container nginx waiting: ErrImagePull (rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found)
Nov 27 21:39:39.527: INFO: Pod "nginx-deployment-55fb7cb77f-vrl45" is not available: Pending on node iruya-worker (HostIP 172.18.0.6), created 2020-11-27 21:39:36 +0000 UTC; container nginx waiting: ContainerCreating (image nginx:404)
Nov 27 21:39:39.529: INFO: Pod "nginx-deployment-55fb7cb77f-wnqvf" is not available: Pending on node iruya-worker (HostIP 172.18.0.6), created 2020-11-27 21:39:36 +0000 UTC; container nginx waiting: ContainerCreating (image nginx:404)
Nov 27 21:39:39.530: INFO: Pod "nginx-deployment-55fb7cb77f-wrkqc" is not available: Pending on node iruya-worker2 (HostIP 172.18.0.5), created 2020-11-27 21:39:36 +0000 UTC; container nginx waiting: ContainerCreating (image nginx:404)
Nov 27 21:39:39.531: INFO: Pod "nginx-deployment-55fb7cb77f-x77mh" is not available: Pending on node iruya-worker (HostIP 172.18.0.6), created 2020-11-27 21:39:34 +0000 UTC; container nginx waiting: ContainerCreating (image nginx:404)
Nov 27 21:39:39.532: INFO: Pod "nginx-deployment-7b8c6f4498-4jznr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4jznr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2341,SelfLink:/api/v1/namespaces/deployment-2341/pods/nginx-deployment-7b8c6f4498-4jznr,UID:c902b25d-85e9-45ce-aad2-40fdc763046b,ResourceVersion:11916713,Generation:0,CreationTimestamp:2020-11-27 21:39:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5a5b504e-33e8-48e1-8803-21cf7a6ee879 0x400267c987 0x400267c988}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2mw7r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2mw7r,Items:[],DefaultMode:*420,Optional:nil,} nil
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2mw7r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400267ca10} {node.kubernetes.io/unreachable Exists NoExecute 0x400267ca30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-11-27 21:39:36 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Nov 27 21:39:39.533: INFO: Pod "nginx-deployment-7b8c6f4498-55rr7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-55rr7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2341,SelfLink:/api/v1/namespaces/deployment-2341/pods/nginx-deployment-7b8c6f4498-55rr7,UID:3e57d69b-f54d-4c40-a734-518c8cc73e2b,ResourceVersion:11916729,Generation:0,CreationTimestamp:2020-11-27 21:39:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5a5b504e-33e8-48e1-8803-21cf7a6ee879 0x400267caf7 0x400267caf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2mw7r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2mw7r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2mw7r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400267cb70} {node.kubernetes.io/unreachable Exists NoExecute 0x400267cb90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-11-27 21:39:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Nov 27 21:39:39.534: INFO: Pod "nginx-deployment-7b8c6f4498-65cfs" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-65cfs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2341,SelfLink:/api/v1/namespaces/deployment-2341/pods/nginx-deployment-7b8c6f4498-65cfs,UID:e2fc4e6f-3bec-4ad4-9fd2-8ac4354e5f37,ResourceVersion:11916657,Generation:0,CreationTimestamp:2020-11-27 21:39:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5a5b504e-33e8-48e1-8803-21cf7a6ee879 0x400267cc57 0x400267cc58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2mw7r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2mw7r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2mw7r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400267ccd0} {node.kubernetes.io/unreachable Exists NoExecute 0x400267ccf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-11-27 21:39:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Nov 27 21:39:39.535: INFO: Pod "nginx-deployment-7b8c6f4498-6tfgf" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6tfgf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2341,SelfLink:/api/v1/namespaces/deployment-2341/pods/nginx-deployment-7b8c6f4498-6tfgf,UID:cf1e95b2-f192-425f-b816-9ac16e8f7f9c,ResourceVersion:11916521,Generation:0,CreationTimestamp:2020-11-27 21:39:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5a5b504e-33e8-48e1-8803-21cf7a6ee879 0x400267cdb7 0x400267cdb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2mw7r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2mw7r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2mw7r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400267ce30} {node.kubernetes.io/unreachable Exists NoExecute 0x400267ce50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:21 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.194,StartTime:2020-11-27 21:39:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-11-27 21:39:31 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://02fe67f0796f31b7b10caddd0737e1bbb761048b383ea642776cfcf01474456b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Nov 27 21:39:39.536: INFO: Pod "nginx-deployment-7b8c6f4498-94g7t" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-94g7t,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2341,SelfLink:/api/v1/namespaces/deployment-2341/pods/nginx-deployment-7b8c6f4498-94g7t,UID:dd3efc08-8530-4516-87e4-2f77c4965fd4,ResourceVersion:11916494,Generation:0,CreationTimestamp:2020-11-27 21:39:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5a5b504e-33e8-48e1-8803-21cf7a6ee879 0x400267cf27 0x400267cf28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2mw7r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2mw7r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2mw7r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400267cfa0} {node.kubernetes.io/unreachable Exists NoExecute 0x400267cfc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:21 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.193,StartTime:2020-11-27 21:39:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-11-27 21:39:29 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://485b6f4af48c68f9353e7666e546dee73ee6014875775e1278389552bdcca25e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Nov 27 21:39:39.537: INFO: Pod "nginx-deployment-7b8c6f4498-bt7qq" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bt7qq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2341,SelfLink:/api/v1/namespaces/deployment-2341/pods/nginx-deployment-7b8c6f4498-bt7qq,UID:6780a1c7-48f7-41bb-858e-2c3bd16623ba,ResourceVersion:11916480,Generation:0,CreationTimestamp:2020-11-27 21:39:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5a5b504e-33e8-48e1-8803-21cf7a6ee879 0x400267d097 0x400267d098}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2mw7r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2mw7r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2mw7r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400267d110} {node.kubernetes.io/unreachable Exists NoExecute 0x400267d130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:21 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.9,StartTime:2020-11-27 21:39:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-11-27 21:39:27 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://36d4ff8341bb32d0e10cd5fe2abec909250018749132572ef54714844d61a749}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Nov 27 21:39:39.538: INFO: Pod "nginx-deployment-7b8c6f4498-cd7sj" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cd7sj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2341,SelfLink:/api/v1/namespaces/deployment-2341/pods/nginx-deployment-7b8c6f4498-cd7sj,UID:d64b4d92-b32c-4087-8c0f-c3fd047c2ca5,ResourceVersion:11916514,Generation:0,CreationTimestamp:2020-11-27 21:39:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5a5b504e-33e8-48e1-8803-21cf7a6ee879 0x400267d207 0x400267d208}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2mw7r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2mw7r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2mw7r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400267d280} {node.kubernetes.io/unreachable Exists NoExecute 0x400267d2a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:21 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.11,StartTime:2020-11-27 21:39:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-11-27 21:39:30 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://2da701cb70a850a3996a2c6eb34b571c650b0337feb6666a49bd69327a3014f9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Nov 27 21:39:39.539: INFO: Pod "nginx-deployment-7b8c6f4498-fj797" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fj797,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2341,SelfLink:/api/v1/namespaces/deployment-2341/pods/nginx-deployment-7b8c6f4498-fj797,UID:e7116bfe-fe1c-43b0-a118-472587cb4628,ResourceVersion:11916692,Generation:0,CreationTimestamp:2020-11-27 21:39:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5a5b504e-33e8-48e1-8803-21cf7a6ee879 0x400267d377 0x400267d378}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2mw7r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2mw7r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2mw7r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400267d4c0} {node.kubernetes.io/unreachable Exists NoExecute 0x400267d4e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-11-27 21:39:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Nov 27 21:39:39.539: INFO: Pod "nginx-deployment-7b8c6f4498-gt87v" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gt87v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2341,SelfLink:/api/v1/namespaces/deployment-2341/pods/nginx-deployment-7b8c6f4498-gt87v,UID:43c14229-c3d6-40f2-834b-c3646f90663c,ResourceVersion:11916483,Generation:0,CreationTimestamp:2020-11-27 21:39:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5a5b504e-33e8-48e1-8803-21cf7a6ee879 0x400267d737 0x400267d738}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2mw7r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2mw7r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2mw7r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400267d7c0} {node.kubernetes.io/unreachable Exists NoExecute 0x400267d7e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:21 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.8,StartTime:2020-11-27 21:39:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-11-27 21:39:27 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c95266fa7f4cce4d27b37aca29941f19a6080f67c653cdc8ab0d885e5fd060d9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Nov 27 21:39:39.540: INFO: Pod "nginx-deployment-7b8c6f4498-h46rt" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h46rt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2341,SelfLink:/api/v1/namespaces/deployment-2341/pods/nginx-deployment-7b8c6f4498-h46rt,UID:c85ad5e9-68e5-4163-8f49-6ea88032656f,ResourceVersion:11916501,Generation:0,CreationTimestamp:2020-11-27 21:39:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5a5b504e-33e8-48e1-8803-21cf7a6ee879 0x400267d8b7 0x400267d8b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2mw7r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2mw7r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2mw7r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400267d930} {node.kubernetes.io/unreachable Exists NoExecute 0x400267d950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:21 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.10,StartTime:2020-11-27 21:39:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-11-27 21:39:30 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9cbd36fbccdf998ea526eae005220ad6f0651e161f636de2f34f737a9196e376}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Nov 27 21:39:39.541: INFO: Pod "nginx-deployment-7b8c6f4498-l4t6k" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-l4t6k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2341,SelfLink:/api/v1/namespaces/deployment-2341/pods/nginx-deployment-7b8c6f4498-l4t6k,UID:90340afa-0834-44b0-8c79-181f500046b6,ResourceVersion:11916665,Generation:0,CreationTimestamp:2020-11-27 21:39:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5a5b504e-33e8-48e1-8803-21cf7a6ee879 0x400267da27 0x400267da28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2mw7r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2mw7r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2mw7r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400267daa0} {node.kubernetes.io/unreachable Exists NoExecute 0x400267dac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-11-27 21:39:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Nov 27 21:39:39.542: INFO: Pod "nginx-deployment-7b8c6f4498-mqp64" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mqp64,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2341,SelfLink:/api/v1/namespaces/deployment-2341/pods/nginx-deployment-7b8c6f4498-mqp64,UID:b5e52663-ad2d-4bc5-a8bd-8c93bd52ee69,ResourceVersion:11916679,Generation:0,CreationTimestamp:2020-11-27 21:39:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5a5b504e-33e8-48e1-8803-21cf7a6ee879 0x400267db87 0x400267db88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2mw7r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2mw7r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2mw7r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400267dc00} {node.kubernetes.io/unreachable Exists NoExecute 0x400267dc20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-11-27 21:39:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Nov 27 21:39:39.543: INFO: Pod "nginx-deployment-7b8c6f4498-mrrsb" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mrrsb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2341,SelfLink:/api/v1/namespaces/deployment-2341/pods/nginx-deployment-7b8c6f4498-mrrsb,UID:e3814734-aad3-45d1-97fb-6944233ba91d,ResourceVersion:11916700,Generation:0,CreationTimestamp:2020-11-27 21:39:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5a5b504e-33e8-48e1-8803-21cf7a6ee879 0x400267dce7 0x400267dce8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2mw7r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2mw7r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2mw7r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400267dd60} {node.kubernetes.io/unreachable Exists NoExecute 0x400267dd80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-11-27 21:39:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Nov 27 21:39:39.544: INFO: Pod "nginx-deployment-7b8c6f4498-qbqgc" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qbqgc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2341,SelfLink:/api/v1/namespaces/deployment-2341/pods/nginx-deployment-7b8c6f4498-qbqgc,UID:5d94880c-5c9a-45a0-837d-4cf43da92da4,ResourceVersion:11916644,Generation:0,CreationTimestamp:2020-11-27 21:39:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5a5b504e-33e8-48e1-8803-21cf7a6ee879 0x400267de47 0x400267de48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2mw7r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2mw7r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2mw7r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x400267dec0} {node.kubernetes.io/unreachable Exists NoExecute 0x400267dee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-11-27 21:39:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Nov 27 21:39:39.545: INFO: Pod "nginx-deployment-7b8c6f4498-rhkpt" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rhkpt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2341,SelfLink:/api/v1/namespaces/deployment-2341/pods/nginx-deployment-7b8c6f4498-rhkpt,UID:5f916b93-610f-400a-96b3-bfc56376cd3f,ResourceVersion:11916697,Generation:0,CreationTimestamp:2020-11-27 21:39:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5a5b504e-33e8-48e1-8803-21cf7a6ee879 0x400267dfa7 0x400267dfa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2mw7r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2mw7r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2mw7r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x40035dc020} {node.kubernetes.io/unreachable Exists NoExecute 0x40035dc040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-11-27 21:39:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Nov 27 21:39:39.546: INFO: Pod "nginx-deployment-7b8c6f4498-v4zgf" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-v4zgf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2341,SelfLink:/api/v1/namespaces/deployment-2341/pods/nginx-deployment-7b8c6f4498-v4zgf,UID:73f6692e-ea02-4245-9910-f6b7d5bdb1c2,ResourceVersion:11916723,Generation:0,CreationTimestamp:2020-11-27 21:39:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5a5b504e-33e8-48e1-8803-21cf7a6ee879 0x40035dc107 0x40035dc108}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2mw7r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2mw7r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2mw7r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x40035dc180} {node.kubernetes.io/unreachable Exists NoExecute 0x40035dc1a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-11-27 21:39:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Nov 27 21:39:39.547: INFO: Pod "nginx-deployment-7b8c6f4498-vj7tm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vj7tm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2341,SelfLink:/api/v1/namespaces/deployment-2341/pods/nginx-deployment-7b8c6f4498-vj7tm,UID:0228d5f6-ccc8-45f2-b390-8a564d8e099c,ResourceVersion:11916690,Generation:0,CreationTimestamp:2020-11-27 21:39:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5a5b504e-33e8-48e1-8803-21cf7a6ee879 0x40035dc267 0x40035dc268}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2mw7r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2mw7r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2mw7r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x40035dc2e0} {node.kubernetes.io/unreachable Exists NoExecute 0x40035dc300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-11-27 21:39:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Nov 27 21:39:39.548: INFO: Pod "nginx-deployment-7b8c6f4498-w6nz2" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-w6nz2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2341,SelfLink:/api/v1/namespaces/deployment-2341/pods/nginx-deployment-7b8c6f4498-w6nz2,UID:47c7e54d-72f7-4078-9b97-59f1885b9eef,ResourceVersion:11916507,Generation:0,CreationTimestamp:2020-11-27 21:39:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5a5b504e-33e8-48e1-8803-21cf7a6ee879 0x40035dc3c7 0x40035dc3c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2mw7r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2mw7r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2mw7r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x40035dc440} {node.kubernetes.io/unreachable Exists NoExecute 0x40035dc460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:21 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.192,StartTime:2020-11-27 21:39:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-11-27 21:39:30 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a761dd8a4e48f6d01461b9bba13e828ca95942db3e3410f99dc184f6736b3311}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Nov 27 21:39:39.549: INFO: Pod "nginx-deployment-7b8c6f4498-x7gcl" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-x7gcl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2341,SelfLink:/api/v1/namespaces/deployment-2341/pods/nginx-deployment-7b8c6f4498-x7gcl,UID:8e076b5f-8fd8-432b-be73-a0f35496bbc7,ResourceVersion:11916527,Generation:0,CreationTimestamp:2020-11-27 21:39:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5a5b504e-33e8-48e1-8803-21cf7a6ee879 0x40035dc537 0x40035dc538}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2mw7r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2mw7r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2mw7r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x40035dc5b0} {node.kubernetes.io/unreachable Exists NoExecute 0x40035dc5d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:21 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.12,StartTime:2020-11-27 21:39:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-11-27 21:39:31 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://23df92feab5aa69190e846ca43a3a9bf47e45aceee374455602fbb06a89cbc18}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Nov 27 21:39:39.550: INFO: Pod "nginx-deployment-7b8c6f4498-xppk7" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xppk7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2341,SelfLink:/api/v1/namespaces/deployment-2341/pods/nginx-deployment-7b8c6f4498-xppk7,UID:811e1de5-2e10-43c1-bb03-c37f195253b7,ResourceVersion:11916669,Generation:0,CreationTimestamp:2020-11-27 21:39:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5a5b504e-33e8-48e1-8803-21cf7a6ee879 0x40035dc6a7 0x40035dc6a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2mw7r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2mw7r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2mw7r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x40035dc720} {node.kubernetes.io/unreachable Exists NoExecute 0x40035dc740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:39:36 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-11-27 21:39:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 21:39:39.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2341" for this suite.
Nov 27 21:39:56.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 21:39:56.376: INFO: namespace deployment-2341 deletion completed in 16.621371091s
• [SLOW TEST:34.770 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 21:39:56.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-df800dca-6bbd-45cf-a300-06ef75c8c0f8
STEP: Creating configMap with name cm-test-opt-upd-50e645f3-4fbe-47bb-84aa-64c310d0ff54
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-df800dca-6bbd-45cf-a300-06ef75c8c0f8
STEP: Updating configmap cm-test-opt-upd-50e645f3-4fbe-47bb-84aa-64c310d0ff54
STEP: Creating configMap with name cm-test-opt-create-fc198b60-7174-40d9-a84e-b958693faf8a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 21:40:08.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-719" for this suite.
Nov 27 21:40:30.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 21:40:31.105: INFO: namespace configmap-719 deletion completed in 22.177580574s
• [SLOW TEST:34.728 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 21:40:31.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-v669
STEP: Creating a pod to test atomic-volume-subpath
Nov 27 21:40:31.551: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-v669" in namespace "subpath-2938" to be "success or failure"
Nov 27 21:40:31.567: INFO: Pod "pod-subpath-test-downwardapi-v669": Phase="Pending", Reason="", readiness=false. Elapsed: 15.969909ms
Nov 27 21:40:33.573: INFO: Pod "pod-subpath-test-downwardapi-v669": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022242098s
Nov 27 21:40:35.581: INFO: Pod "pod-subpath-test-downwardapi-v669": Phase="Running", Reason="", readiness=true. Elapsed: 4.029814926s
Nov 27 21:40:37.587: INFO: Pod "pod-subpath-test-downwardapi-v669": Phase="Running", Reason="", readiness=true. Elapsed: 6.035860487s
Nov 27 21:40:39.593: INFO: Pod "pod-subpath-test-downwardapi-v669": Phase="Running", Reason="", readiness=true. Elapsed: 8.042090889s
Nov 27 21:40:41.599: INFO: Pod "pod-subpath-test-downwardapi-v669": Phase="Running", Reason="", readiness=true. Elapsed: 10.048083178s
Nov 27 21:40:43.606: INFO: Pod "pod-subpath-test-downwardapi-v669": Phase="Running", Reason="", readiness=true. Elapsed: 12.054746449s
Nov 27 21:40:45.612: INFO: Pod "pod-subpath-test-downwardapi-v669": Phase="Running", Reason="", readiness=true. Elapsed: 14.061539736s
Nov 27 21:40:47.640: INFO: Pod "pod-subpath-test-downwardapi-v669": Phase="Running", Reason="", readiness=true. Elapsed: 16.08883681s
Nov 27 21:40:49.647: INFO: Pod "pod-subpath-test-downwardapi-v669": Phase="Running", Reason="", readiness=true. Elapsed: 18.096582464s
Nov 27 21:40:51.655: INFO: Pod "pod-subpath-test-downwardapi-v669": Phase="Running", Reason="", readiness=true. Elapsed: 20.104136277s
Nov 27 21:40:53.662: INFO: Pod "pod-subpath-test-downwardapi-v669": Phase="Running", Reason="", readiness=true. Elapsed: 22.111338864s
Nov 27 21:40:55.676: INFO: Pod "pod-subpath-test-downwardapi-v669": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.124815227s
STEP: Saw pod success
Nov 27 21:40:55.676: INFO: Pod "pod-subpath-test-downwardapi-v669" satisfied condition "success or failure"
Nov 27 21:40:55.680: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-v669 container test-container-subpath-downwardapi-v669: 
STEP: delete the pod
Nov 27 21:40:55.726: INFO: Waiting for pod pod-subpath-test-downwardapi-v669 to disappear
Nov 27 21:40:55.741: INFO: Pod pod-subpath-test-downwardapi-v669 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-v669
Nov 27 21:40:55.741: INFO: Deleting pod "pod-subpath-test-downwardapi-v669" in namespace "subpath-2938"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 21:40:55.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2938" for this suite.
Nov 27 21:41:01.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 21:41:01.951: INFO: namespace subpath-2938 deletion completed in 6.163529127s
• [SLOW TEST:30.843 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 21:41:01.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Nov 27 21:41:02.099: INFO: Waiting up to 5m0s for pod "downward-api-bb569256-4a72-41e0-a379-6b420d8e26cb" in namespace "downward-api-8790" to be "success or failure"
Nov 27 21:41:02.107: INFO: Pod "downward-api-bb569256-4a72-41e0-a379-6b420d8e26cb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.226372ms
Nov 27 21:41:04.277: INFO: Pod "downward-api-bb569256-4a72-41e0-a379-6b420d8e26cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178484035s
Nov 27 21:41:06.284: INFO: Pod "downward-api-bb569256-4a72-41e0-a379-6b420d8e26cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.18490445s
STEP: Saw pod success
Nov 27 21:41:06.284: INFO: Pod "downward-api-bb569256-4a72-41e0-a379-6b420d8e26cb" satisfied condition "success or failure"
Nov 27 21:41:06.289: INFO: Trying to get logs from node iruya-worker2 pod downward-api-bb569256-4a72-41e0-a379-6b420d8e26cb container dapi-container: 
STEP: delete the pod
Nov 27 21:41:06.397: INFO: Waiting for pod downward-api-bb569256-4a72-41e0-a379-6b420d8e26cb to disappear
Nov 27 21:41:06.449: INFO: Pod downward-api-bb569256-4a72-41e0-a379-6b420d8e26cb no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 21:41:06.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8790" for this suite.
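The repeated `Waiting up to 5m0s for pod … to be "success or failure"` / `Phase="Pending" … Elapsed: …` entries above come from a poll-until-terminal-phase loop. A minimal standalone sketch of that pattern (the `get_phase` callback and the function name are hypothetical, not the e2e framework's actual API):

```python
import time


def wait_for_terminal_phase(get_phase, timeout=300.0, poll=2.0):
    """Poll get_phase() until the pod reports Succeeded or Failed,
    mirroring the framework's 5m0s "success or failure" wait."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        # Each iteration corresponds to one 'Phase=..., Elapsed: ...' log line.
        print(f'Pod phase={phase!r}. Elapsed: {time.monotonic() - start:.3f}s')
        if phase in ('Succeeded', 'Failed'):
            return phase
        time.sleep(poll)
    raise TimeoutError('pod did not reach a terminal phase in time')
```

The framework polls on a short fixed interval rather than watching events, which is why the log shows phase samples roughly every two seconds.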
Nov 27 21:41:12.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 21:41:12.694: INFO: namespace downward-api-8790 deletion completed in 6.238571635s
• [SLOW TEST:10.741 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 21:41:12.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Nov 27 21:41:13.508: INFO: Pod name wrapped-volume-race-0418c336-28d9-4383-a067-34edb4223536: Found 0 pods out of 5
Nov 27 21:41:18.528: INFO: Pod name wrapped-volume-race-0418c336-28d9-4383-a067-34edb4223536: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-0418c336-28d9-4383-a067-34edb4223536 in namespace emptydir-wrapper-1824, will wait for the garbage collector to delete the pods
Nov 27 21:41:32.679: INFO: Deleting ReplicationController wrapped-volume-race-0418c336-28d9-4383-a067-34edb4223536 took: 43.341755ms
Nov 27 21:41:32.980: INFO: Terminating ReplicationController wrapped-volume-race-0418c336-28d9-4383-a067-34edb4223536 pods took: 300.652247ms
STEP: Creating RC which spawns configmap-volume pods
Nov 27 21:42:15.745: INFO: Pod name wrapped-volume-race-a643089c-e73e-4824-82a7-020348bc7184: Found 0 pods out of 5
Nov 27 21:42:20.763: INFO: Pod name wrapped-volume-race-a643089c-e73e-4824-82a7-020348bc7184: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a643089c-e73e-4824-82a7-020348bc7184 in namespace emptydir-wrapper-1824, will wait for the garbage collector to delete the pods
Nov 27 21:42:34.886: INFO: Deleting ReplicationController wrapped-volume-race-a643089c-e73e-4824-82a7-020348bc7184 took: 8.932204ms
Nov 27 21:42:35.187: INFO: Terminating ReplicationController wrapped-volume-race-a643089c-e73e-4824-82a7-020348bc7184 pods took: 300.673183ms
STEP: Creating RC which spawns configmap-volume pods
Nov 27 21:43:15.652: INFO: Pod name wrapped-volume-race-52023e6b-1f6c-4864-a14d-de5dbdb44d54: Found 0 pods out of 5
Nov 27 21:43:20.671: INFO: Pod name wrapped-volume-race-52023e6b-1f6c-4864-a14d-de5dbdb44d54: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-52023e6b-1f6c-4864-a14d-de5dbdb44d54 in namespace emptydir-wrapper-1824, will wait for the garbage collector to delete the pods
Nov 27 21:43:34.792: INFO: Deleting ReplicationController wrapped-volume-race-52023e6b-1f6c-4864-a14d-de5dbdb44d54 took: 19.554799ms
Nov 27 21:43:35.092: INFO: Terminating ReplicationController wrapped-volume-race-52023e6b-1f6c-4864-a14d-de5dbdb44d54 pods took: 300.651969ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 21:44:16.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1824" for this suite.
Nov 27 21:44:24.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 21:44:24.418: INFO: namespace emptydir-wrapper-1824 deletion completed in 8.176687765s
• [SLOW TEST:191.723 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Probing container
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 21:44:24.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 21:45:24.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2235" for this suite.
Nov 27 21:45:46.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 21:45:46.725: INFO: namespace container-probe-2235 deletion completed in 22.189551537s
• [SLOW TEST:82.306 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 21:45:46.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-a62b16e8-c8f4-4b1e-ac3e-24de41a72f9a
STEP: Creating a pod to test consume configMaps
Nov 27 21:45:46.881: INFO: Waiting up to 5m0s for pod "pod-configmaps-da6ff204-b2cd-450e-91d4-d10369cf4d2e" in namespace "configmap-643" to be "success or failure"
Nov 27 21:45:46.892: INFO: Pod "pod-configmaps-da6ff204-b2cd-450e-91d4-d10369cf4d2e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.29834ms
Nov 27 21:45:48.898: INFO: Pod "pod-configmaps-da6ff204-b2cd-450e-91d4-d10369cf4d2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017350051s
Nov 27 21:45:50.905: INFO: Pod "pod-configmaps-da6ff204-b2cd-450e-91d4-d10369cf4d2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024130934s
STEP: Saw pod success
Nov 27 21:45:50.905: INFO: Pod "pod-configmaps-da6ff204-b2cd-450e-91d4-d10369cf4d2e" satisfied condition "success or failure"
Nov 27 21:45:50.909: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-da6ff204-b2cd-450e-91d4-d10369cf4d2e container configmap-volume-test: 
STEP: delete the pod
Nov 27 21:45:50.958: INFO: Waiting for pod pod-configmaps-da6ff204-b2cd-450e-91d4-d10369cf4d2e to disappear
Nov 27 21:45:51.001: INFO: Pod pod-configmaps-da6ff204-b2cd-450e-91d4-d10369cf4d2e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 21:45:51.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-643" for this suite.
Nov 27 21:45:57.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 21:45:57.197: INFO: namespace configmap-643 deletion completed in 6.187266515s
• [SLOW TEST:10.472 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 21:45:57.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Nov 27 21:45:57.344: INFO: Create a RollingUpdate DaemonSet
Nov 27 21:45:57.350: INFO: Check that daemon pods launch on every node of the cluster
Nov 27 21:45:57.420: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 21:45:57.436: INFO: Number of nodes with available pods: 0
Nov 27 21:45:57.436: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 21:45:58.446: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 21:45:58.452: INFO: Number of nodes with available pods: 0
Nov 27 21:45:58.452: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 21:45:59.478: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 21:45:59.538: INFO: Number of nodes with available pods: 0
Nov 27 21:45:59.538: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 21:46:00.565: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 21:46:00.570: INFO: Number of nodes with available pods: 0
Nov 27 21:46:00.570: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 21:46:01.448: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 21:46:01.455: INFO: Number of nodes with available pods: 1
Nov 27 21:46:01.455: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 21:46:02.449: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 21:46:02.455: INFO: Number of nodes with available pods: 2
Nov 27 21:46:02.455: INFO: Number of running nodes: 2, number of available pods: 2
Nov 27 21:46:02.456: INFO: Update the DaemonSet to trigger a rollout
Nov 27 21:46:02.467: INFO: Updating DaemonSet daemon-set
Nov 27 21:46:15.502: INFO: Roll back the DaemonSet before rollout is complete
Nov 27 21:46:15.512: INFO: Updating DaemonSet daemon-set
Nov 27 21:46:15.513: INFO: Make sure DaemonSet rollback is complete
Nov 27 21:46:15.538: INFO: Wrong image for pod: daemon-set-qsxsf. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Nov 27 21:46:15.539: INFO: Pod daemon-set-qsxsf is not available
Nov 27 21:46:15.567: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 21:46:16.575: INFO: Wrong image for pod: daemon-set-qsxsf. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Nov 27 21:46:16.575: INFO: Pod daemon-set-qsxsf is not available
Nov 27 21:46:16.582: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 21:46:17.621: INFO: Wrong image for pod: daemon-set-qsxsf. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Nov 27 21:46:17.621: INFO: Pod daemon-set-qsxsf is not available
Nov 27 21:46:17.630: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 21:46:18.575: INFO: Wrong image for pod: daemon-set-qsxsf. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Nov 27 21:46:18.576: INFO: Pod daemon-set-qsxsf is not available
Nov 27 21:46:18.586: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 21:46:19.575: INFO: Wrong image for pod: daemon-set-qsxsf. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Nov 27 21:46:19.575: INFO: Pod daemon-set-qsxsf is not available
Nov 27 21:46:19.588: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 21:46:20.574: INFO: Wrong image for pod: daemon-set-qsxsf. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Nov 27 21:46:20.574: INFO: Pod daemon-set-qsxsf is not available
Nov 27 21:46:20.581: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 21:46:21.575: INFO: Wrong image for pod: daemon-set-qsxsf. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Nov 27 21:46:21.575: INFO: Pod daemon-set-qsxsf is not available
Nov 27 21:46:21.583: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 21:46:22.575: INFO: Wrong image for pod: daemon-set-qsxsf. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Nov 27 21:46:22.575: INFO: Pod daemon-set-qsxsf is not available
Nov 27 21:46:22.586: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 21:46:23.576: INFO: Wrong image for pod: daemon-set-qsxsf. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Nov 27 21:46:23.576: INFO: Pod daemon-set-qsxsf is not available
Nov 27 21:46:23.586: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 21:46:24.573: INFO: Wrong image for pod: daemon-set-qsxsf. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Nov 27 21:46:24.574: INFO: Pod daemon-set-qsxsf is not available
Nov 27 21:46:24.581: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 21:46:25.584: INFO: Pod daemon-set-2snvb is not available
Nov 27 21:46:25.594: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1780, will wait for the garbage collector to delete the pods
Nov 27 21:46:25.673: INFO: Deleting DaemonSet.extensions daemon-set took: 8.347187ms
Nov 27 21:46:25.974: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.608549ms
Nov 27 21:46:28.486: INFO: Number of nodes with available pods: 0
Nov 27 21:46:28.486: INFO: Number of running nodes: 0, number of available pods: 0
Nov 27 21:46:28.503: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1780/daemonsets","resourceVersion":"11918827"},"items":null}
Nov 27 21:46:28.506: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1780/pods","resourceVersion":"11918827"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 21:46:28.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1780" for this suite.
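The rollback check above loops until no daemon pod reports the rolled-forward image (`foo:non-existent`), emitting a `Wrong image for pod: …` line for each mismatch. A minimal sketch of that comparison (the `rollback_complete` helper and its dict-based input are hypothetical, not the test's actual Go code):

```python
def rollback_complete(pod_images, want_image):
    """Return True only when every daemon pod runs the rolled-back image,
    logging a mismatch line in the same style as the e2e output."""
    ok = True
    for name, image in pod_images.items():
        if image != want_image:
            print(f'Wrong image for pod: {name}. Expected: {want_image}, got: {image}.')
            ok = False
    return ok
```

Because the faulty image never becomes ready, the rollback replaces it without restarting the pods that were already healthy, which is the "without unnecessary restarts" property the test asserts.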
Nov 27 21:46:34.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 21:46:34.731: INFO: namespace daemonsets-1780 deletion completed in 6.198625035s
• [SLOW TEST:37.529 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for the cluster [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 21:46:34.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9288.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9288.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Nov 27 21:46:40.969: INFO: DNS probes using dns-9288/dns-test-172aa261-9f74-47ed-81e7-f209af8fbe9e succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 21:46:41.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9288" for this suite.
Nov 27 21:46:47.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 21:46:47.249: INFO: namespace dns-9288 deletion completed in 6.227642991s
• [SLOW TEST:12.516 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for the cluster [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 21:46:47.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4971
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 27 21:46:47.357: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Nov 27 21:47:11.549: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.214 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4971 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 21:47:11.549: INFO: >>> kubeConfig: /root/.kube/config
I1127 21:47:11.621340 7 log.go:172] (0x400099c8f0) (0x40029bfcc0) Create stream
I1127 21:47:11.621610 7 log.go:172] (0x400099c8f0) (0x40029bfcc0) Stream added, broadcasting: 1
I1127 21:47:11.625719 7 log.go:172] (0x400099c8f0) Reply frame received for 1
I1127 21:47:11.625927 7 log.go:172] (0x400099c8f0) (0x40026e37c0) Create stream
I1127 21:47:11.626034 7 log.go:172] (0x400099c8f0) (0x40026e37c0) Stream added, broadcasting: 3
I1127 21:47:11.627707 7 log.go:172] (0x400099c8f0) Reply frame received for 3
I1127 21:47:11.627848 7 log.go:172] (0x400099c8f0) (0x40029bfd60) Create stream
I1127 21:47:11.627923 7 log.go:172] (0x400099c8f0) (0x40029bfd60) Stream added, broadcasting: 5
I1127 21:47:11.629513 7 log.go:172] (0x400099c8f0) Reply frame received for 5
I1127 21:47:12.765114 7 log.go:172] (0x400099c8f0) Data frame received for 5
I1127 21:47:12.765287 7 log.go:172] (0x40029bfd60) (5) Data frame handling
I1127 21:47:12.765447 7 log.go:172] (0x400099c8f0) Data frame received for 3
I1127 21:47:12.765600 7 log.go:172] (0x40026e37c0) (3) Data frame handling
I1127 21:47:12.765752 7 log.go:172] (0x40026e37c0) (3) Data frame sent
I1127 21:47:12.765873 7 log.go:172] (0x400099c8f0) Data frame received for 3
I1127 21:47:12.765971 7 log.go:172] (0x40026e37c0) (3) Data frame handling
I1127 21:47:12.766876 7 log.go:172] (0x400099c8f0) Data frame received for 1
I1127 21:47:12.767120 7 log.go:172] (0x40029bfcc0) (1) Data frame handling
I1127 21:47:12.767300 7 log.go:172] (0x40029bfcc0) (1) Data frame sent
I1127 21:47:12.767445 7 log.go:172] (0x400099c8f0) (0x40029bfcc0) Stream removed, broadcasting: 1
I1127 21:47:12.767623 7 log.go:172] (0x400099c8f0) Go away received
I1127 21:47:12.768017 7 log.go:172] (0x400099c8f0) (0x40029bfcc0) Stream removed, broadcasting: 1
I1127 21:47:12.768176 7 log.go:172] (0x400099c8f0) (0x40026e37c0) Stream removed, broadcasting: 3
I1127 21:47:12.768274 7 log.go:172] (0x400099c8f0) (0x40029bfd60) Stream removed, broadcasting: 5
Nov 27 21:47:12.768: INFO: Found all expected endpoints: [netserver-0]
Nov 27 21:47:12.773: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.45 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4971 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 21:47:12.773: INFO: >>> kubeConfig: /root/.kube/config
I1127 21:47:12.835779 7 log.go:172] (0x4000a588f0) (0x4003033d60) Create stream
I1127 21:47:12.835956 7 log.go:172] (0x4000a588f0) (0x4003033d60) Stream added, broadcasting: 1
I1127 21:47:12.839290 7 log.go:172] (0x4000a588f0) Reply frame received for 1
I1127 21:47:12.839468 7 log.go:172] (0x4000a588f0) (0x40026e3860) Create stream
I1127 21:47:12.839549 7 log.go:172] (0x4000a588f0) (0x40026e3860) Stream added, broadcasting: 3
I1127 21:47:12.841766 7 log.go:172] (0x4000a588f0) Reply frame received for 3
I1127 21:47:12.841923 7 log.go:172] (0x4000a588f0) (0x40026e3900) Create stream
I1127 21:47:12.842054 7 log.go:172] (0x4000a588f0) (0x40026e3900) Stream added, broadcasting: 5
I1127 21:47:12.843686 7 log.go:172] (0x4000a588f0) Reply frame received for 5
I1127 21:47:13.919206 7 log.go:172] (0x4000a588f0) Data frame received for 3
I1127 21:47:13.919405 7 log.go:172] (0x40026e3860) (3) Data frame handling
I1127 21:47:13.919544 7 log.go:172] (0x4000a588f0) Data frame received for 5
I1127 21:47:13.919703 7 log.go:172] (0x40026e3900) (5) Data frame handling
I1127 21:47:13.919833 7 log.go:172] (0x40026e3860) (3) Data frame sent
I1127 21:47:13.919987 7 log.go:172] (0x4000a588f0) Data frame received for 3
I1127 21:47:13.920108 7 log.go:172] (0x40026e3860) (3) Data frame handling
I1127 21:47:13.921240 7 log.go:172] (0x4000a588f0) Data frame received for 1
I1127 21:47:13.921437 7 log.go:172] (0x4003033d60) (1) Data frame handling
I1127 21:47:13.921604 7 log.go:172] (0x4003033d60) (1) Data frame sent
I1127 21:47:13.921737 7 log.go:172] (0x4000a588f0) (0x4003033d60) Stream removed, broadcasting: 1
I1127 21:47:13.921902 7 log.go:172] (0x4000a588f0) Go away received
I1127 21:47:13.922187 7 log.go:172] (0x4000a588f0) (0x4003033d60) Stream removed, broadcasting: 1
I1127 21:47:13.922296 7 log.go:172] (0x4000a588f0) (0x40026e3860) Stream removed, broadcasting: 3
I1127 21:47:13.922381 7 log.go:172] (0x4000a588f0) (0x40026e3900) Stream removed, broadcasting: 5
Nov 27 21:47:13.922: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 21:47:13.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4971" for this suite.
Nov 27 21:47:35.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 21:47:36.174: INFO: namespace pod-network-test-4971 deletion completed in 22.241023888s
• [SLOW TEST:48.924 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 21:47:36.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-f25c563e-d3d6-4169-ad51-d40375dddc35
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-f25c563e-d3d6-4169-ad51-d40375dddc35
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 21:47:44.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1817" for this suite.
Nov 27 21:48:06.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 21:48:06.542: INFO: namespace configmap-1817 deletion completed in 22.189466307s
• [SLOW TEST:30.366 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 21:48:06.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-5094 I1127 21:48:06.641564 7 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5094, replica count: 1 I1127 21:48:07.695105 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1127 21:48:08.696591 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1127 21:48:09.697994 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1127 21:48:10.699161 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 27 21:48:10.867: INFO: Created: latency-svc-4h6m8 Nov 27 21:48:10.872: INFO: Got endpoints: latency-svc-4h6m8 [71.390059ms] Nov 27 21:48:10.909: INFO: Created: latency-svc-kd5k9 Nov 27 21:48:10.932: INFO: Got endpoints: latency-svc-kd5k9 [58.263708ms] Nov 27 21:48:11.010: INFO: Created: latency-svc-jzd2c Nov 27 21:48:11.024: INFO: Got endpoints: latency-svc-jzd2c [150.809347ms] Nov 27 21:48:11.061: INFO: Created: latency-svc-ws6xg Nov 27 21:48:11.077: INFO: Got endpoints: latency-svc-ws6xg [201.924381ms] Nov 27 21:48:11.102: INFO: Created: latency-svc-bwb5t Nov 27 21:48:11.179: INFO: Got endpoints: latency-svc-bwb5t [304.123082ms] Nov 27 21:48:11.181: INFO: Created: latency-svc-8bn9k Nov 27 21:48:11.202: INFO: Got endpoints: latency-svc-8bn9k [327.298216ms] Nov 27 21:48:11.253: INFO: Created: latency-svc-xvq9s Nov 27 21:48:11.262: INFO: Got endpoints: latency-svc-xvq9s [387.731295ms] Nov 27 21:48:11.349: INFO: 
Created: latency-svc-l4c8b Nov 27 21:48:11.354: INFO: Got endpoints: latency-svc-l4c8b [480.311884ms] Nov 27 21:48:11.382: INFO: Created: latency-svc-252d9 Nov 27 21:48:11.396: INFO: Got endpoints: latency-svc-252d9 [522.617014ms] Nov 27 21:48:11.478: INFO: Created: latency-svc-dkwj7 Nov 27 21:48:11.483: INFO: Got endpoints: latency-svc-dkwj7 [608.947627ms] Nov 27 21:48:11.522: INFO: Created: latency-svc-6nvb7 Nov 27 21:48:11.535: INFO: Got endpoints: latency-svc-6nvb7 [661.64596ms] Nov 27 21:48:11.657: INFO: Created: latency-svc-crk8l Nov 27 21:48:11.667: INFO: Got endpoints: latency-svc-crk8l [792.372473ms] Nov 27 21:48:11.701: INFO: Created: latency-svc-q5x45 Nov 27 21:48:11.715: INFO: Got endpoints: latency-svc-q5x45 [839.694757ms] Nov 27 21:48:11.742: INFO: Created: latency-svc-lh8v7 Nov 27 21:48:11.782: INFO: Got endpoints: latency-svc-lh8v7 [908.03294ms] Nov 27 21:48:11.796: INFO: Created: latency-svc-ntdbv Nov 27 21:48:11.811: INFO: Got endpoints: latency-svc-ntdbv [937.912418ms] Nov 27 21:48:11.840: INFO: Created: latency-svc-brwgm Nov 27 21:48:11.926: INFO: Got endpoints: latency-svc-brwgm [1.052025545s] Nov 27 21:48:11.953: INFO: Created: latency-svc-fl6kx Nov 27 21:48:11.967: INFO: Got endpoints: latency-svc-fl6kx [1.034558498s] Nov 27 21:48:12.000: INFO: Created: latency-svc-8gzvj Nov 27 21:48:12.016: INFO: Got endpoints: latency-svc-8gzvj [991.364959ms] Nov 27 21:48:12.076: INFO: Created: latency-svc-5cd4n Nov 27 21:48:12.082: INFO: Got endpoints: latency-svc-5cd4n [1.004983016s] Nov 27 21:48:12.107: INFO: Created: latency-svc-8b6zv Nov 27 21:48:12.125: INFO: Got endpoints: latency-svc-8b6zv [943.771113ms] Nov 27 21:48:12.150: INFO: Created: latency-svc-smt8g Nov 27 21:48:12.167: INFO: Got endpoints: latency-svc-smt8g [964.866755ms] Nov 27 21:48:12.220: INFO: Created: latency-svc-dkglr Nov 27 21:48:12.228: INFO: Got endpoints: latency-svc-dkglr [965.266097ms] Nov 27 21:48:12.254: INFO: Created: latency-svc-lshc9 Nov 27 21:48:12.302: INFO: Got 
endpoints: latency-svc-lshc9 [947.78685ms] Nov 27 21:48:12.413: INFO: Created: latency-svc-lpthn Nov 27 21:48:12.419: INFO: Got endpoints: latency-svc-lpthn [1.022506063s] Nov 27 21:48:12.475: INFO: Created: latency-svc-7jczb Nov 27 21:48:12.491: INFO: Got endpoints: latency-svc-7jczb [1.007836163s] Nov 27 21:48:12.556: INFO: Created: latency-svc-djjts Nov 27 21:48:12.563: INFO: Got endpoints: latency-svc-djjts [1.027532146s] Nov 27 21:48:12.589: INFO: Created: latency-svc-27nd4 Nov 27 21:48:12.600: INFO: Got endpoints: latency-svc-27nd4 [932.282132ms] Nov 27 21:48:12.642: INFO: Created: latency-svc-2ptrw Nov 27 21:48:12.705: INFO: Got endpoints: latency-svc-2ptrw [990.027452ms] Nov 27 21:48:12.738: INFO: Created: latency-svc-jbmln Nov 27 21:48:12.793: INFO: Got endpoints: latency-svc-jbmln [1.010067599s] Nov 27 21:48:12.860: INFO: Created: latency-svc-nhh6h Nov 27 21:48:12.863: INFO: Got endpoints: latency-svc-nhh6h [1.051841864s] Nov 27 21:48:12.907: INFO: Created: latency-svc-8mq2d Nov 27 21:48:12.912: INFO: Got endpoints: latency-svc-8mq2d [985.632714ms] Nov 27 21:48:12.936: INFO: Created: latency-svc-4pm7n Nov 27 21:48:12.949: INFO: Got endpoints: latency-svc-4pm7n [981.771875ms] Nov 27 21:48:13.004: INFO: Created: latency-svc-c87b2 Nov 27 21:48:13.008: INFO: Got endpoints: latency-svc-c87b2 [991.963995ms] Nov 27 21:48:13.037: INFO: Created: latency-svc-t6fbv Nov 27 21:48:13.051: INFO: Got endpoints: latency-svc-t6fbv [968.395516ms] Nov 27 21:48:13.074: INFO: Created: latency-svc-mdpg2 Nov 27 21:48:13.099: INFO: Got endpoints: latency-svc-mdpg2 [973.729751ms] Nov 27 21:48:13.172: INFO: Created: latency-svc-bkw99 Nov 27 21:48:13.189: INFO: Got endpoints: latency-svc-bkw99 [1.022140538s] Nov 27 21:48:13.217: INFO: Created: latency-svc-wndxq Nov 27 21:48:13.232: INFO: Got endpoints: latency-svc-wndxq [132.899541ms] Nov 27 21:48:13.265: INFO: Created: latency-svc-f5mbn Nov 27 21:48:13.315: INFO: Got endpoints: latency-svc-f5mbn [1.086749084s] Nov 27 21:48:13.342: 
INFO: Created: latency-svc-7rvpx Nov 27 21:48:13.352: INFO: Got endpoints: latency-svc-7rvpx [1.049302286s] Nov 27 21:48:13.393: INFO: Created: latency-svc-4gcdc Nov 27 21:48:13.395: INFO: Got endpoints: latency-svc-4gcdc [976.164532ms] Nov 27 21:48:13.479: INFO: Created: latency-svc-nrrdp Nov 27 21:48:13.485: INFO: Got endpoints: latency-svc-nrrdp [993.016747ms] Nov 27 21:48:13.523: INFO: Created: latency-svc-frp8x Nov 27 21:48:13.539: INFO: Got endpoints: latency-svc-frp8x [975.628736ms] Nov 27 21:48:13.651: INFO: Created: latency-svc-ft4b7 Nov 27 21:48:13.669: INFO: Got endpoints: latency-svc-ft4b7 [1.068704573s] Nov 27 21:48:13.711: INFO: Created: latency-svc-zqsdd Nov 27 21:48:13.719: INFO: Got endpoints: latency-svc-zqsdd [1.013997247s] Nov 27 21:48:13.744: INFO: Created: latency-svc-rh8qv Nov 27 21:48:13.806: INFO: Got endpoints: latency-svc-rh8qv [1.01304134s] Nov 27 21:48:13.820: INFO: Created: latency-svc-kggxk Nov 27 21:48:13.833: INFO: Got endpoints: latency-svc-kggxk [969.835561ms] Nov 27 21:48:13.884: INFO: Created: latency-svc-kpnmv Nov 27 21:48:13.900: INFO: Got endpoints: latency-svc-kpnmv [987.379231ms] Nov 27 21:48:13.951: INFO: Created: latency-svc-9bhwj Nov 27 21:48:13.954: INFO: Got endpoints: latency-svc-9bhwj [1.004405646s] Nov 27 21:48:13.992: INFO: Created: latency-svc-fgkvf Nov 27 21:48:14.009: INFO: Got endpoints: latency-svc-fgkvf [1.000727441s] Nov 27 21:48:14.032: INFO: Created: latency-svc-pjbvq Nov 27 21:48:14.088: INFO: Got endpoints: latency-svc-pjbvq [1.037196934s] Nov 27 21:48:14.101: INFO: Created: latency-svc-5tb8t Nov 27 21:48:14.117: INFO: Got endpoints: latency-svc-5tb8t [927.251766ms] Nov 27 21:48:14.140: INFO: Created: latency-svc-6zs45 Nov 27 21:48:14.159: INFO: Got endpoints: latency-svc-6zs45 [927.061636ms] Nov 27 21:48:14.182: INFO: Created: latency-svc-vfww9 Nov 27 21:48:14.243: INFO: Got endpoints: latency-svc-vfww9 [928.283423ms] Nov 27 21:48:14.246: INFO: Created: latency-svc-ncvm4 Nov 27 21:48:14.270: INFO: Got 
endpoints: latency-svc-ncvm4 [918.517791ms] Nov 27 21:48:14.317: INFO: Created: latency-svc-b8vwv Nov 27 21:48:14.375: INFO: Got endpoints: latency-svc-b8vwv [979.416588ms] Nov 27 21:48:14.401: INFO: Created: latency-svc-fzcd4 Nov 27 21:48:14.422: INFO: Got endpoints: latency-svc-fzcd4 [936.957652ms] Nov 27 21:48:14.458: INFO: Created: latency-svc-6m4rd Nov 27 21:48:14.513: INFO: Got endpoints: latency-svc-6m4rd [973.497865ms] Nov 27 21:48:14.544: INFO: Created: latency-svc-gznkb Nov 27 21:48:14.560: INFO: Got endpoints: latency-svc-gznkb [891.028717ms] Nov 27 21:48:14.586: INFO: Created: latency-svc-dd2gh Nov 27 21:48:14.645: INFO: Got endpoints: latency-svc-dd2gh [925.260449ms] Nov 27 21:48:14.662: INFO: Created: latency-svc-wfns5 Nov 27 21:48:14.675: INFO: Got endpoints: latency-svc-wfns5 [868.887624ms] Nov 27 21:48:14.698: INFO: Created: latency-svc-pkhfb Nov 27 21:48:14.717: INFO: Got endpoints: latency-svc-pkhfb [883.806091ms] Nov 27 21:48:14.783: INFO: Created: latency-svc-7mq49 Nov 27 21:48:14.786: INFO: Got endpoints: latency-svc-7mq49 [885.953017ms] Nov 27 21:48:14.832: INFO: Created: latency-svc-w6pnv Nov 27 21:48:14.849: INFO: Got endpoints: latency-svc-w6pnv [894.996634ms] Nov 27 21:48:14.874: INFO: Created: latency-svc-cftnw Nov 27 21:48:14.920: INFO: Got endpoints: latency-svc-cftnw [910.71016ms] Nov 27 21:48:14.934: INFO: Created: latency-svc-c58th Nov 27 21:48:14.952: INFO: Got endpoints: latency-svc-c58th [863.271129ms] Nov 27 21:48:14.975: INFO: Created: latency-svc-7jdhs Nov 27 21:48:14.988: INFO: Got endpoints: latency-svc-7jdhs [870.971921ms] Nov 27 21:48:15.010: INFO: Created: latency-svc-j9mvf Nov 27 21:48:15.018: INFO: Got endpoints: latency-svc-j9mvf [858.795547ms] Nov 27 21:48:15.071: INFO: Created: latency-svc-h6vmz Nov 27 21:48:15.091: INFO: Got endpoints: latency-svc-h6vmz [847.188419ms] Nov 27 21:48:15.126: INFO: Created: latency-svc-jmb5l Nov 27 21:48:15.145: INFO: Got endpoints: latency-svc-jmb5l [874.552041ms] Nov 27 21:48:15.227: 
INFO: Created: latency-svc-gnllf Nov 27 21:48:15.268: INFO: Got endpoints: latency-svc-gnllf [892.266096ms] Nov 27 21:48:15.304: INFO: Created: latency-svc-hvnlr Nov 27 21:48:15.320: INFO: Got endpoints: latency-svc-hvnlr [897.587257ms] Nov 27 21:48:15.382: INFO: Created: latency-svc-t5rmv Nov 27 21:48:15.397: INFO: Got endpoints: latency-svc-t5rmv [884.539499ms] Nov 27 21:48:15.425: INFO: Created: latency-svc-fsb78 Nov 27 21:48:15.451: INFO: Got endpoints: latency-svc-fsb78 [891.165918ms] Nov 27 21:48:15.555: INFO: Created: latency-svc-c5pj8 Nov 27 21:48:15.563: INFO: Got endpoints: latency-svc-c5pj8 [917.742224ms] Nov 27 21:48:15.606: INFO: Created: latency-svc-w8zqb Nov 27 21:48:15.638: INFO: Got endpoints: latency-svc-w8zqb [962.453831ms] Nov 27 21:48:15.690: INFO: Created: latency-svc-kpqmn Nov 27 21:48:15.704: INFO: Got endpoints: latency-svc-kpqmn [986.083115ms] Nov 27 21:48:15.729: INFO: Created: latency-svc-nkkvf Nov 27 21:48:15.746: INFO: Got endpoints: latency-svc-nkkvf [959.672055ms] Nov 27 21:48:15.772: INFO: Created: latency-svc-lkgcq Nov 27 21:48:15.812: INFO: Got endpoints: latency-svc-lkgcq [962.53768ms] Nov 27 21:48:15.825: INFO: Created: latency-svc-j2l5n Nov 27 21:48:15.843: INFO: Got endpoints: latency-svc-j2l5n [922.634397ms] Nov 27 21:48:15.875: INFO: Created: latency-svc-9hcbl Nov 27 21:48:15.898: INFO: Got endpoints: latency-svc-9hcbl [945.560465ms] Nov 27 21:48:15.938: INFO: Created: latency-svc-fl5bx Nov 27 21:48:15.941: INFO: Got endpoints: latency-svc-fl5bx [952.889193ms] Nov 27 21:48:15.971: INFO: Created: latency-svc-n4ld6 Nov 27 21:48:15.987: INFO: Got endpoints: latency-svc-n4ld6 [968.86515ms] Nov 27 21:48:16.011: INFO: Created: latency-svc-bdn5s Nov 27 21:48:16.030: INFO: Got endpoints: latency-svc-bdn5s [938.862924ms] Nov 27 21:48:16.070: INFO: Created: latency-svc-t272z Nov 27 21:48:16.074: INFO: Got endpoints: latency-svc-t272z [928.537134ms] Nov 27 21:48:16.101: INFO: Created: latency-svc-fjtq7 Nov 27 21:48:16.125: INFO: Got 
endpoints: latency-svc-fjtq7 [856.615985ms] Nov 27 21:48:16.158: INFO: Created: latency-svc-6sk77 Nov 27 21:48:16.208: INFO: Got endpoints: latency-svc-6sk77 [887.77753ms] Nov 27 21:48:16.216: INFO: Created: latency-svc-6wd27 Nov 27 21:48:16.235: INFO: Got endpoints: latency-svc-6wd27 [837.076205ms] Nov 27 21:48:16.264: INFO: Created: latency-svc-fgvmb Nov 27 21:48:16.277: INFO: Got endpoints: latency-svc-fgvmb [825.307279ms] Nov 27 21:48:16.346: INFO: Created: latency-svc-vqxxd Nov 27 21:48:16.377: INFO: Got endpoints: latency-svc-vqxxd [814.270712ms] Nov 27 21:48:16.378: INFO: Created: latency-svc-ffqc8 Nov 27 21:48:16.397: INFO: Got endpoints: latency-svc-ffqc8 [759.275562ms] Nov 27 21:48:16.420: INFO: Created: latency-svc-dx8vq Nov 27 21:48:16.428: INFO: Got endpoints: latency-svc-dx8vq [723.965945ms] Nov 27 21:48:16.489: INFO: Created: latency-svc-7d6p5 Nov 27 21:48:16.492: INFO: Got endpoints: latency-svc-7d6p5 [745.618468ms] Nov 27 21:48:16.541: INFO: Created: latency-svc-8mzqw Nov 27 21:48:16.554: INFO: Got endpoints: latency-svc-8mzqw [741.019081ms] Nov 27 21:48:16.653: INFO: Created: latency-svc-lncv5 Nov 27 21:48:16.654: INFO: Got endpoints: latency-svc-lncv5 [810.31088ms] Nov 27 21:48:16.700: INFO: Created: latency-svc-6vzwb Nov 27 21:48:16.716: INFO: Got endpoints: latency-svc-6vzwb [818.188885ms] Nov 27 21:48:16.812: INFO: Created: latency-svc-8d8w9 Nov 27 21:48:16.815: INFO: Got endpoints: latency-svc-8d8w9 [873.946822ms] Nov 27 21:48:16.858: INFO: Created: latency-svc-vrl4x Nov 27 21:48:16.892: INFO: Got endpoints: latency-svc-vrl4x [904.851209ms] Nov 27 21:48:16.950: INFO: Created: latency-svc-d5zz8 Nov 27 21:48:16.953: INFO: Got endpoints: latency-svc-d5zz8 [922.621568ms] Nov 27 21:48:16.983: INFO: Created: latency-svc-bns52 Nov 27 21:48:16.999: INFO: Got endpoints: latency-svc-bns52 [925.06951ms] Nov 27 21:48:17.024: INFO: Created: latency-svc-65sgj Nov 27 21:48:17.035: INFO: Got endpoints: latency-svc-65sgj [909.926877ms] Nov 27 21:48:17.117: 
INFO: Created: latency-svc-tq6d4 Nov 27 21:48:17.120: INFO: Got endpoints: latency-svc-tq6d4 [912.157282ms] Nov 27 21:48:17.205: INFO: Created: latency-svc-4ppk5 Nov 27 21:48:17.243: INFO: Got endpoints: latency-svc-4ppk5 [1.008186017s] Nov 27 21:48:17.265: INFO: Created: latency-svc-h7nqw Nov 27 21:48:17.282: INFO: Got endpoints: latency-svc-h7nqw [1.004705954s] Nov 27 21:48:17.313: INFO: Created: latency-svc-qp5db Nov 27 21:48:17.324: INFO: Got endpoints: latency-svc-qp5db [946.349889ms] Nov 27 21:48:17.396: INFO: Created: latency-svc-dqls8 Nov 27 21:48:17.416: INFO: Got endpoints: latency-svc-dqls8 [1.018227608s] Nov 27 21:48:17.455: INFO: Created: latency-svc-72hb5 Nov 27 21:48:17.480: INFO: Got endpoints: latency-svc-72hb5 [1.051955959s] Nov 27 21:48:17.529: INFO: Created: latency-svc-mnsq2 Nov 27 21:48:17.553: INFO: Got endpoints: latency-svc-mnsq2 [1.060744325s] Nov 27 21:48:17.589: INFO: Created: latency-svc-tmcrk Nov 27 21:48:17.711: INFO: Got endpoints: latency-svc-tmcrk [1.156712332s] Nov 27 21:48:17.738: INFO: Created: latency-svc-w8snx Nov 27 21:48:17.775: INFO: Got endpoints: latency-svc-w8snx [1.121287952s] Nov 27 21:48:17.806: INFO: Created: latency-svc-2bf6h Nov 27 21:48:17.854: INFO: Got endpoints: latency-svc-2bf6h [1.137910761s] Nov 27 21:48:17.865: INFO: Created: latency-svc-hrhvc Nov 27 21:48:17.931: INFO: Got endpoints: latency-svc-hrhvc [1.115876954s] Nov 27 21:48:17.998: INFO: Created: latency-svc-msrqv Nov 27 21:48:18.001: INFO: Got endpoints: latency-svc-msrqv [1.108733239s] Nov 27 21:48:18.028: INFO: Created: latency-svc-m4zjg Nov 27 21:48:18.044: INFO: Got endpoints: latency-svc-m4zjg [1.090466795s] Nov 27 21:48:18.094: INFO: Created: latency-svc-dfkgr Nov 27 21:48:18.141: INFO: Got endpoints: latency-svc-dfkgr [1.142166244s] Nov 27 21:48:18.147: INFO: Created: latency-svc-7jtfx Nov 27 21:48:18.164: INFO: Got endpoints: latency-svc-7jtfx [1.128574354s] Nov 27 21:48:18.218: INFO: Created: latency-svc-pggvf Nov 27 21:48:18.232: INFO: Got 
endpoints: latency-svc-pggvf [1.111359541s]
Nov 27 21:48:18.285: INFO: Created: latency-svc-cgc92
Nov 27 21:48:18.288: INFO: Got endpoints: latency-svc-cgc92 [1.044315225s]
Nov 27 21:48:18.328: INFO: Created: latency-svc-kkx5v
Nov 27 21:48:18.345: INFO: Got endpoints: latency-svc-kkx5v [1.063194106s]
Nov 27 21:48:18.364: INFO: Created: latency-svc-62nrf
Nov 27 21:48:18.375: INFO: Got endpoints: latency-svc-62nrf [1.05118849s]
Nov 27 21:48:18.418: INFO: Created: latency-svc-wlx6v
Nov 27 21:48:18.441: INFO: Got endpoints: latency-svc-wlx6v [1.02523854s]
Nov 27 21:48:18.464: INFO: Created: latency-svc-hndqh
Nov 27 21:48:18.477: INFO: Got endpoints: latency-svc-hndqh [996.792779ms]
Nov 27 21:48:18.506: INFO: Created: latency-svc-w42bg
Nov 27 21:48:18.578: INFO: Got endpoints: latency-svc-w42bg [1.025439376s]
Nov 27 21:48:18.581: INFO: Created: latency-svc-nprh7
Nov 27 21:48:18.586: INFO: Got endpoints: latency-svc-nprh7 [874.766888ms]
Nov 27 21:48:18.603: INFO: Created: latency-svc-6f7n8
Nov 27 21:48:18.616: INFO: Got endpoints: latency-svc-6f7n8 [840.537037ms]
Nov 27 21:48:18.652: INFO: Created: latency-svc-7r2m9
Nov 27 21:48:18.664: INFO: Got endpoints: latency-svc-7r2m9 [809.905546ms]
Nov 27 21:48:18.735: INFO: Created: latency-svc-6nq89
Nov 27 21:48:18.738: INFO: Got endpoints: latency-svc-6nq89 [806.456264ms]
Nov 27 21:48:18.783: INFO: Created: latency-svc-xk4dd
Nov 27 21:48:18.803: INFO: Got endpoints: latency-svc-xk4dd [802.019211ms]
Nov 27 21:48:18.891: INFO: Created: latency-svc-x9cnk
Nov 27 21:48:18.892: INFO: Got endpoints: latency-svc-x9cnk [848.249012ms]
Nov 27 21:48:18.946: INFO: Created: latency-svc-xxg2b
Nov 27 21:48:18.960: INFO: Got endpoints: latency-svc-xxg2b [818.359821ms]
Nov 27 21:48:18.982: INFO: Created: latency-svc-skh5g
Nov 27 21:48:19.028: INFO: Got endpoints: latency-svc-skh5g [864.303134ms]
Nov 27 21:48:19.033: INFO: Created: latency-svc-64n7s
Nov 27 21:48:19.057: INFO: Got endpoints: latency-svc-64n7s [825.7153ms]
Nov 27 21:48:19.100: INFO: Created: latency-svc-8nm49
Nov 27 21:48:19.110: INFO: Got endpoints: latency-svc-8nm49 [822.094158ms]
Nov 27 21:48:19.172: INFO: Created: latency-svc-rk6v9
Nov 27 21:48:19.175: INFO: Got endpoints: latency-svc-rk6v9 [829.505197ms]
Nov 27 21:48:19.215: INFO: Created: latency-svc-55p9d
Nov 27 21:48:19.233: INFO: Got endpoints: latency-svc-55p9d [857.314172ms]
Nov 27 21:48:19.263: INFO: Created: latency-svc-qdf22
Nov 27 21:48:19.315: INFO: Got endpoints: latency-svc-qdf22 [873.852176ms]
Nov 27 21:48:19.352: INFO: Created: latency-svc-bh9bj
Nov 27 21:48:19.375: INFO: Got endpoints: latency-svc-bh9bj [897.425225ms]
Nov 27 21:48:19.460: INFO: Created: latency-svc-n2lmm
Nov 27 21:48:19.462: INFO: Got endpoints: latency-svc-n2lmm [883.783498ms]
Nov 27 21:48:19.485: INFO: Created: latency-svc-84pq2
Nov 27 21:48:19.502: INFO: Got endpoints: latency-svc-84pq2 [916.076262ms]
Nov 27 21:48:19.534: INFO: Created: latency-svc-cn852
Nov 27 21:48:19.550: INFO: Got endpoints: latency-svc-cn852 [933.862284ms]
Nov 27 21:48:19.621: INFO: Created: latency-svc-clvwv
Nov 27 21:48:19.624: INFO: Got endpoints: latency-svc-clvwv [959.127247ms]
Nov 27 21:48:19.718: INFO: Created: latency-svc-cnmnf
Nov 27 21:48:19.753: INFO: Got endpoints: latency-svc-cnmnf [1.014674093s]
Nov 27 21:48:19.785: INFO: Created: latency-svc-m5ptj
Nov 27 21:48:19.790: INFO: Got endpoints: latency-svc-m5ptj [986.380179ms]
Nov 27 21:48:19.809: INFO: Created: latency-svc-jrh6v
Nov 27 21:48:19.821: INFO: Got endpoints: latency-svc-jrh6v [928.569295ms]
Nov 27 21:48:19.845: INFO: Created: latency-svc-klcsv
Nov 27 21:48:19.925: INFO: Got endpoints: latency-svc-klcsv [964.821535ms]
Nov 27 21:48:19.929: INFO: Created: latency-svc-kddg4
Nov 27 21:48:19.935: INFO: Got endpoints: latency-svc-kddg4 [906.775969ms]
Nov 27 21:48:19.963: INFO: Created: latency-svc-xx9hn
Nov 27 21:48:19.971: INFO: Got endpoints: latency-svc-xx9hn [913.28023ms]
Nov 27 21:48:19.993: INFO: Created: latency-svc-gx9d6
Nov 27 21:48:20.002: INFO: Got endpoints: latency-svc-gx9d6 [891.268159ms]
Nov 27 21:48:20.019: INFO: Created: latency-svc-rrh76
Nov 27 21:48:20.051: INFO: Got endpoints: latency-svc-rrh76 [875.957131ms]
Nov 27 21:48:20.061: INFO: Created: latency-svc-6gdp2
Nov 27 21:48:20.074: INFO: Got endpoints: latency-svc-6gdp2 [841.544886ms]
Nov 27 21:48:20.091: INFO: Created: latency-svc-sb4d8
Nov 27 21:48:20.104: INFO: Got endpoints: latency-svc-sb4d8 [788.895543ms]
Nov 27 21:48:20.119: INFO: Created: latency-svc-8rpx4
Nov 27 21:48:20.135: INFO: Got endpoints: latency-svc-8rpx4 [759.576701ms]
Nov 27 21:48:20.196: INFO: Created: latency-svc-gwqn9
Nov 27 21:48:20.221: INFO: Created: latency-svc-wshl9
Nov 27 21:48:20.221: INFO: Got endpoints: latency-svc-gwqn9 [758.758635ms]
Nov 27 21:48:20.238: INFO: Got endpoints: latency-svc-wshl9 [735.742617ms]
Nov 27 21:48:20.258: INFO: Created: latency-svc-vnhc4
Nov 27 21:48:20.274: INFO: Got endpoints: latency-svc-vnhc4 [723.634841ms]
Nov 27 21:48:20.341: INFO: Created: latency-svc-76gxx
Nov 27 21:48:20.341: INFO: Got endpoints: latency-svc-76gxx [717.383866ms]
Nov 27 21:48:20.367: INFO: Created: latency-svc-zrlg2
Nov 27 21:48:20.377: INFO: Got endpoints: latency-svc-zrlg2 [623.75384ms]
Nov 27 21:48:20.409: INFO: Created: latency-svc-zh2gj
Nov 27 21:48:20.425: INFO: Got endpoints: latency-svc-zh2gj [634.765952ms]
Nov 27 21:48:20.489: INFO: Created: latency-svc-n7w7p
Nov 27 21:48:20.496: INFO: Got endpoints: latency-svc-n7w7p [675.143097ms]
Nov 27 21:48:20.527: INFO: Created: latency-svc-4mvw7
Nov 27 21:48:20.539: INFO: Got endpoints: latency-svc-4mvw7 [613.634316ms]
Nov 27 21:48:20.584: INFO: Created: latency-svc-djd46
Nov 27 21:48:20.632: INFO: Got endpoints: latency-svc-djd46 [696.676745ms]
Nov 27 21:48:20.637: INFO: Created: latency-svc-2xr4r
Nov 27 21:48:20.660: INFO: Got endpoints: latency-svc-2xr4r [688.43218ms]
Nov 27 21:48:20.677: INFO: Created: latency-svc-kc7kz
Nov 27 21:48:20.690: INFO: Got endpoints: latency-svc-kc7kz [687.920475ms]
Nov 27 21:48:20.706: INFO: Created: latency-svc-9nqqh
Nov 27 21:48:20.720: INFO: Got endpoints: latency-svc-9nqqh [668.458563ms]
Nov 27 21:48:20.789: INFO: Created: latency-svc-95hck
Nov 27 21:48:20.793: INFO: Got endpoints: latency-svc-95hck [717.98482ms]
Nov 27 21:48:20.817: INFO: Created: latency-svc-9vk74
Nov 27 21:48:20.841: INFO: Got endpoints: latency-svc-9vk74 [736.228246ms]
Nov 27 21:48:20.877: INFO: Created: latency-svc-5hz48
Nov 27 21:48:20.944: INFO: Got endpoints: latency-svc-5hz48 [809.174075ms]
Nov 27 21:48:20.947: INFO: Created: latency-svc-x4xgz
Nov 27 21:48:20.955: INFO: Got endpoints: latency-svc-x4xgz [733.260285ms]
Nov 27 21:48:20.984: INFO: Created: latency-svc-8wh6z
Nov 27 21:48:20.998: INFO: Got endpoints: latency-svc-8wh6z [759.755389ms]
Nov 27 21:48:21.019: INFO: Created: latency-svc-q9c9h
Nov 27 21:48:21.033: INFO: Got endpoints: latency-svc-q9c9h [759.028087ms]
Nov 27 21:48:21.082: INFO: Created: latency-svc-f8bkf
Nov 27 21:48:21.129: INFO: Got endpoints: latency-svc-f8bkf [787.246122ms]
Nov 27 21:48:21.131: INFO: Created: latency-svc-dr6g8
Nov 27 21:48:21.142: INFO: Got endpoints: latency-svc-dr6g8 [764.781786ms]
Nov 27 21:48:21.171: INFO: Created: latency-svc-fnmkp
Nov 27 21:48:21.207: INFO: Got endpoints: latency-svc-fnmkp [782.238872ms]
Nov 27 21:48:21.218: INFO: Created: latency-svc-9klf9
Nov 27 21:48:21.245: INFO: Got endpoints: latency-svc-9klf9 [748.412181ms]
Nov 27 21:48:21.278: INFO: Created: latency-svc-29chv
Nov 27 21:48:21.293: INFO: Got endpoints: latency-svc-29chv [753.661993ms]
Nov 27 21:48:21.351: INFO: Created: latency-svc-rqh9d
Nov 27 21:48:21.386: INFO: Got endpoints: latency-svc-rqh9d [753.279376ms]
Nov 27 21:48:21.387: INFO: Created: latency-svc-m9pnj
Nov 27 21:48:21.408: INFO: Got endpoints: latency-svc-m9pnj [748.547919ms]
Nov 27 21:48:21.434: INFO: Created: latency-svc-c5wsv
Nov 27 21:48:21.451: INFO: Got endpoints: latency-svc-c5wsv [760.805096ms]
Nov 27 21:48:21.494: INFO: Created: latency-svc-2xd5q
Nov 27 21:48:21.525: INFO: Got endpoints: latency-svc-2xd5q [804.725546ms]
Nov 27 21:48:21.558: INFO: Created: latency-svc-h42h6
Nov 27 21:48:21.571: INFO: Got endpoints: latency-svc-h42h6 [777.732202ms]
Nov 27 21:48:21.589: INFO: Created: latency-svc-mpzdf
Nov 27 21:48:21.657: INFO: Got endpoints: latency-svc-mpzdf [816.247526ms]
Nov 27 21:48:21.686: INFO: Created: latency-svc-7ztfc
Nov 27 21:48:21.703: INFO: Got endpoints: latency-svc-7ztfc [758.432481ms]
Nov 27 21:48:21.744: INFO: Created: latency-svc-5vqvj
Nov 27 21:48:21.782: INFO: Got endpoints: latency-svc-5vqvj [827.356415ms]
Nov 27 21:48:21.817: INFO: Created: latency-svc-wrzrg
Nov 27 21:48:21.842: INFO: Got endpoints: latency-svc-wrzrg [843.901459ms]
Nov 27 21:48:21.920: INFO: Created: latency-svc-hb2vw
Nov 27 21:48:21.922: INFO: Got endpoints: latency-svc-hb2vw [888.409742ms]
Nov 27 21:48:21.951: INFO: Created: latency-svc-r776l
Nov 27 21:48:21.963: INFO: Got endpoints: latency-svc-r776l [833.567236ms]
Nov 27 21:48:21.980: INFO: Created: latency-svc-vlrpm
Nov 27 21:48:21.992: INFO: Got endpoints: latency-svc-vlrpm [849.907511ms]
Nov 27 21:48:22.014: INFO: Created: latency-svc-sx2kp
Nov 27 21:48:22.051: INFO: Got endpoints: latency-svc-sx2kp [843.937555ms]
Nov 27 21:48:22.068: INFO: Created: latency-svc-9sj84
Nov 27 21:48:22.083: INFO: Got endpoints: latency-svc-9sj84 [838.246641ms]
Nov 27 21:48:22.104: INFO: Created: latency-svc-58j7w
Nov 27 21:48:22.119: INFO: Got endpoints: latency-svc-58j7w [825.682663ms]
Nov 27 21:48:22.142: INFO: Created: latency-svc-2tpkv
Nov 27 21:48:22.207: INFO: Got endpoints: latency-svc-2tpkv [820.659984ms]
Nov 27 21:48:22.209: INFO: Created: latency-svc-r246c
Nov 27 21:48:22.215: INFO: Got endpoints: latency-svc-r246c [806.303525ms]
Nov 27 21:48:22.238: INFO: Created: latency-svc-6wknn
Nov 27 21:48:22.247: INFO: Got endpoints: latency-svc-6wknn [795.675248ms]
Nov 27 21:48:22.268: INFO: Created: latency-svc-8fhbd
Nov 27 21:48:22.277: INFO: Got endpoints: latency-svc-8fhbd [751.339684ms]
Nov 27 21:48:22.296: INFO: Created: latency-svc-skppp
Nov 27 21:48:22.381: INFO: Got endpoints: latency-svc-skppp [810.074005ms]
Nov 27 21:48:22.384: INFO: Created: latency-svc-8s8pp
Nov 27 21:48:22.391: INFO: Got endpoints: latency-svc-8s8pp [733.212212ms]
Nov 27 21:48:22.412: INFO: Created: latency-svc-74hjk
Nov 27 21:48:22.427: INFO: Got endpoints: latency-svc-74hjk [723.74436ms]
Nov 27 21:48:22.448: INFO: Created: latency-svc-l24bh
Nov 27 21:48:22.458: INFO: Got endpoints: latency-svc-l24bh [675.778411ms]
Nov 27 21:48:22.520: INFO: Created: latency-svc-68xb9
Nov 27 21:48:22.549: INFO: Got endpoints: latency-svc-68xb9 [706.486376ms]
Nov 27 21:48:22.583: INFO: Created: latency-svc-f77hg
Nov 27 21:48:22.596: INFO: Got endpoints: latency-svc-f77hg [673.586189ms]
Nov 27 21:48:22.645: INFO: Created: latency-svc-ftdsk
Nov 27 21:48:22.656: INFO: Got endpoints: latency-svc-ftdsk [693.39333ms]
Nov 27 21:48:22.693: INFO: Created: latency-svc-pzshh
Nov 27 21:48:22.717: INFO: Got endpoints: latency-svc-pzshh [724.850853ms]
Nov 27 21:48:22.718: INFO: Latencies: [58.263708ms 132.899541ms 150.809347ms 201.924381ms 304.123082ms 327.298216ms 387.731295ms 480.311884ms 522.617014ms 608.947627ms 613.634316ms 623.75384ms 634.765952ms 661.64596ms 668.458563ms 673.586189ms 675.143097ms 675.778411ms 687.920475ms 688.43218ms 693.39333ms 696.676745ms 706.486376ms 717.383866ms 717.98482ms 723.634841ms 723.74436ms 723.965945ms 724.850853ms 733.212212ms 733.260285ms 735.742617ms 736.228246ms 741.019081ms 745.618468ms 748.412181ms 748.547919ms 751.339684ms 753.279376ms 753.661993ms 758.432481ms 758.758635ms 759.028087ms 759.275562ms 759.576701ms 759.755389ms 760.805096ms 764.781786ms 777.732202ms 782.238872ms 787.246122ms 788.895543ms 792.372473ms 795.675248ms 802.019211ms 804.725546ms 806.303525ms 806.456264ms 809.174075ms 809.905546ms 810.074005ms 810.31088ms 814.270712ms 816.247526ms 818.188885ms 818.359821ms 820.659984ms 822.094158ms 825.307279ms 825.682663ms 825.7153ms 827.356415ms 829.505197ms 833.567236ms 837.076205ms 838.246641ms 839.694757ms 840.537037ms 841.544886ms 843.901459ms 843.937555ms 847.188419ms 848.249012ms 849.907511ms 856.615985ms 857.314172ms 858.795547ms 863.271129ms 864.303134ms 868.887624ms 870.971921ms 873.852176ms 873.946822ms 874.552041ms 874.766888ms 875.957131ms 883.783498ms 883.806091ms 884.539499ms 885.953017ms 887.77753ms 888.409742ms 891.028717ms 891.165918ms 891.268159ms 892.266096ms 894.996634ms 897.425225ms 897.587257ms 904.851209ms 906.775969ms 908.03294ms 909.926877ms 910.71016ms 912.157282ms 913.28023ms 916.076262ms 917.742224ms 918.517791ms 922.621568ms 922.634397ms 925.06951ms 925.260449ms 927.061636ms 927.251766ms 928.283423ms 928.537134ms 928.569295ms 932.282132ms 933.862284ms 936.957652ms 937.912418ms 938.862924ms 943.771113ms 945.560465ms 946.349889ms 947.78685ms 952.889193ms 959.127247ms 959.672055ms 962.453831ms 962.53768ms 964.821535ms 964.866755ms 965.266097ms 968.395516ms 968.86515ms 969.835561ms 973.497865ms 973.729751ms 975.628736ms 976.164532ms 979.416588ms 981.771875ms 985.632714ms 986.083115ms 986.380179ms 987.379231ms 990.027452ms 991.364959ms 991.963995ms 993.016747ms 996.792779ms 1.000727441s 1.004405646s 1.004705954s 1.004983016s 1.007836163s 1.008186017s 1.010067599s 1.01304134s 1.013997247s 1.014674093s 1.018227608s 1.022140538s 1.022506063s 1.02523854s 1.025439376s 1.027532146s 1.034558498s 1.037196934s 1.044315225s 1.049302286s 1.05118849s 1.051841864s 1.051955959s 1.052025545s 1.060744325s 1.063194106s 1.068704573s 1.086749084s 1.090466795s 1.108733239s 1.111359541s 1.115876954s 1.121287952s 1.128574354s 1.137910761s 1.142166244s 1.156712332s]
Nov 27 21:48:22.720: INFO: 50 %ile: 887.77753ms
Nov 27 21:48:22.720: INFO: 90 %ile: 1.037196934s
Nov 27 21:48:22.720: INFO: 99 %ile: 1.142166244s
Nov 27 21:48:22.720: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 21:48:22.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-5094" for this suite.
Nov 27 21:48:56.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 21:48:56.952: INFO: namespace svc-latency-5094 deletion completed in 34.22391625s
• [SLOW TEST:50.408 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should not be very high [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 21:48:56.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Nov 27 21:49:04.418: INFO: 6 pods remaining
Nov 27 21:49:04.418: INFO: 0 pods has nil DeletionTimestamp
Nov 27 21:49:04.418: INFO:
Nov 27 21:49:05.374: INFO: 0 pods remaining
Nov 27 21:49:05.374: INFO: 0 pods has nil DeletionTimestamp
Nov 27 21:49:05.375: INFO:
STEP: Gathering metrics
W1127 21:49:05.902740       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 27 21:49:05.902: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 21:49:05.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7783" for this suite.
Nov 27 21:49:12.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 21:49:12.443: INFO: namespace gc-7783 deletion completed in 6.533987099s
• [SLOW TEST:15.485 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 21:49:12.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Nov 27 21:49:17.104: INFO: Successfully updated pod "annotationupdate658c6c25-de58-49d4-9b5a-7884e770bfb9"
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 21:49:19.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7807" for this suite.
Nov 27 21:49:41.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 21:49:41.315: INFO: namespace projected-7807 deletion completed in 22.180603608s
• [SLOW TEST:28.870 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update annotations on modification [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 21:49:41.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-39092740-31aa-413f-b601-d7d3d95cad5e
STEP: Creating a pod to test consume secrets
Nov 27 21:49:41.411: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5e061481-f88e-4eef-8c8e-55dc6645a384" in namespace "projected-8787" to be "success or failure"
Nov 27 21:49:41.435: INFO: Pod "pod-projected-secrets-5e061481-f88e-4eef-8c8e-55dc6645a384": Phase="Pending", Reason="", readiness=false. Elapsed: 23.78854ms
Nov 27 21:49:43.441: INFO: Pod "pod-projected-secrets-5e061481-f88e-4eef-8c8e-55dc6645a384": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029819802s
Nov 27 21:49:45.448: INFO: Pod "pod-projected-secrets-5e061481-f88e-4eef-8c8e-55dc6645a384": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036549784s
STEP: Saw pod success
Nov 27 21:49:45.448: INFO: Pod "pod-projected-secrets-5e061481-f88e-4eef-8c8e-55dc6645a384" satisfied condition "success or failure"
Nov 27 21:49:45.453: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-5e061481-f88e-4eef-8c8e-55dc6645a384 container projected-secret-volume-test:
STEP: delete the pod
Nov 27 21:49:45.494: INFO: Waiting for pod pod-projected-secrets-5e061481-f88e-4eef-8c8e-55dc6645a384 to disappear
Nov 27 21:49:45.534: INFO: Pod pod-projected-secrets-5e061481-f88e-4eef-8c8e-55dc6645a384 no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 21:49:45.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8787" for this suite.
Nov 27 21:49:51.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 21:49:51.801: INFO: namespace projected-8787 deletion completed in 6.25856147s
• [SLOW TEST:10.483 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 21:49:51.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Nov 27 21:49:51.900: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 21:49:53.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4074" for this suite.
Nov 27 21:49:59.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 21:49:59.373: INFO: namespace kubectl-4074 deletion completed in 6.219882329s
• [SLOW TEST:7.571 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Proxy server
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support proxy with --port 0 [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 21:49:59.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-d7812548-069f-4551-abbe-8f752114ffbb
STEP: Creating a pod to test consume configMaps
Nov 27 21:49:59.440: INFO: Waiting up to 5m0s for pod "pod-configmaps-e50fed92-dc5c-4e67-bc10-2b0702c39d0f" in namespace "configmap-7267" to be "success or failure"
Nov 27 21:49:59.456: INFO: Pod "pod-configmaps-e50fed92-dc5c-4e67-bc10-2b0702c39d0f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.987128ms
Nov 27 21:50:01.463: INFO: Pod "pod-configmaps-e50fed92-dc5c-4e67-bc10-2b0702c39d0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022193078s
Nov 27 21:50:03.470: INFO: Pod "pod-configmaps-e50fed92-dc5c-4e67-bc10-2b0702c39d0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029748246s
STEP: Saw pod success
Nov 27 21:50:03.471: INFO: Pod "pod-configmaps-e50fed92-dc5c-4e67-bc10-2b0702c39d0f" satisfied condition "success or failure"
Nov 27 21:50:03.475: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-e50fed92-dc5c-4e67-bc10-2b0702c39d0f container configmap-volume-test:
STEP: delete the pod
Nov 27 21:50:03.498: INFO: Waiting for pod pod-configmaps-e50fed92-dc5c-4e67-bc10-2b0702c39d0f to disappear
Nov 27 21:50:03.532: INFO: Pod pod-configmaps-e50fed92-dc5c-4e67-bc10-2b0702c39d0f no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 21:50:03.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7267" for this suite.
Nov 27 21:50:09.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 21:50:09.718: INFO: namespace configmap-7267 deletion completed in 6.175958s
• [SLOW TEST:10.343 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 21:50:09.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Nov 27 21:50:09.817: INFO: Waiting up to 5m0s for pod "pod-0a0592d4-e96d-4c4a-bf1e-a08e2e98875b" in namespace "emptydir-7062" to be "success or failure"
Nov 27 21:50:09.831: INFO: Pod "pod-0a0592d4-e96d-4c4a-bf1e-a08e2e98875b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.010969ms
Nov 27 21:50:11.838: INFO: Pod "pod-0a0592d4-e96d-4c4a-bf1e-a08e2e98875b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020431629s
Nov 27 21:50:13.844: INFO: Pod "pod-0a0592d4-e96d-4c4a-bf1e-a08e2e98875b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026485821s
STEP: Saw pod success
Nov 27 21:50:13.844: INFO: Pod "pod-0a0592d4-e96d-4c4a-bf1e-a08e2e98875b" satisfied condition "success or failure"
Nov 27 21:50:13.848: INFO: Trying to get logs from node iruya-worker pod pod-0a0592d4-e96d-4c4a-bf1e-a08e2e98875b container test-container:
STEP: delete the pod
Nov 27 21:50:13.870: INFO: Waiting for pod pod-0a0592d4-e96d-4c4a-bf1e-a08e2e98875b to disappear
Nov 27 21:50:13.874: INFO: Pod pod-0a0592d4-e96d-4c4a-bf1e-a08e2e98875b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 21:50:13.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7062" for this suite.
Nov 27 21:50:19.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 21:50:20.126: INFO: namespace emptydir-7062 deletion completed in 6.242961642s
• [SLOW TEST:10.406 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 21:50:20.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Nov 27 21:50:20.202: INFO: Waiting up to 5m0s for pod "client-containers-08d28c05-2ce4-4c37-b08c-9db1f9bbe86b" in namespace "containers-3183" to be "success or failure"
Nov 27 21:50:20.259: INFO: Pod "client-containers-08d28c05-2ce4-4c37-b08c-9db1f9bbe86b": Phase="Pending", Reason="", readiness=false. Elapsed: 56.588129ms
Nov 27 21:50:22.265: INFO: Pod "client-containers-08d28c05-2ce4-4c37-b08c-9db1f9bbe86b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062555742s
Nov 27 21:50:24.271: INFO: Pod "client-containers-08d28c05-2ce4-4c37-b08c-9db1f9bbe86b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068729246s
STEP: Saw pod success
Nov 27 21:50:24.271: INFO: Pod "client-containers-08d28c05-2ce4-4c37-b08c-9db1f9bbe86b" satisfied condition "success or failure"
Nov 27 21:50:24.276: INFO: Trying to get logs from node iruya-worker2 pod client-containers-08d28c05-2ce4-4c37-b08c-9db1f9bbe86b container test-container:
STEP: delete the pod
Nov 27 21:50:24.345: INFO: Waiting for pod client-containers-08d28c05-2ce4-4c37-b08c-9db1f9bbe86b to disappear
Nov 27 21:50:24.389: INFO: Pod client-containers-08d28c05-2ce4-4c37-b08c-9db1f9bbe86b no longer exists
[AfterEach] [k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 21:50:24.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3183" for this suite.
Nov 27 21:50:30.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:50:30.588: INFO: namespace containers-3183 deletion completed in 6.188410083s • [SLOW TEST:10.460 seconds] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:50:30.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-546dc89c-a039-4242-99db-6625454a141c [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:50:30.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"configmap-8322" for this suite. Nov 27 21:50:36.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:50:36.985: INFO: namespace configmap-8322 deletion completed in 6.210987514s • [SLOW TEST:6.395 seconds] [sig-node] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:50:36.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-2eccc61c-4c0e-4543-b3fb-4caed641e53a STEP: Creating a pod to test consume secrets Nov 27 21:50:37.120: INFO: Waiting up to 5m0s for pod "pod-secrets-397682bc-bb81-49dd-953f-96baa0fee0a3" in namespace "secrets-7584" to be "success or failure" Nov 27 21:50:37.138: INFO: Pod "pod-secrets-397682bc-bb81-49dd-953f-96baa0fee0a3": 
Phase="Pending", Reason="", readiness=false. Elapsed: 18.19644ms Nov 27 21:50:39.211: INFO: Pod "pod-secrets-397682bc-bb81-49dd-953f-96baa0fee0a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091393151s Nov 27 21:50:41.217: INFO: Pod "pod-secrets-397682bc-bb81-49dd-953f-96baa0fee0a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097293198s STEP: Saw pod success Nov 27 21:50:41.218: INFO: Pod "pod-secrets-397682bc-bb81-49dd-953f-96baa0fee0a3" satisfied condition "success or failure" Nov 27 21:50:41.222: INFO: Trying to get logs from node iruya-worker pod pod-secrets-397682bc-bb81-49dd-953f-96baa0fee0a3 container secret-env-test: STEP: delete the pod Nov 27 21:50:41.257: INFO: Waiting for pod pod-secrets-397682bc-bb81-49dd-953f-96baa0fee0a3 to disappear Nov 27 21:50:41.294: INFO: Pod pod-secrets-397682bc-bb81-49dd-953f-96baa0fee0a3 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:50:41.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7584" for this suite. 
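[Editor's note] The Secrets test destroyed above creates a pod whose environment pulls a key from a Secret via `secretKeyRef` and then checks the container's output. A sketch of the pod shape involved (built as a plain dict; all names and the busybox image are hypothetical stand-ins, not taken from the e2e framework):

```python
# Illustrative pod manifest for consuming a Secret key as an env var.
# Names here are stand-ins; the real test generates UUID-suffixed names.

def secret_env_pod(pod_name, secret_name, key, env_name):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": pod_name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "secret-env-test",
                "image": "busybox",
                "command": ["sh", "-c", "env"],
                "env": [{
                    "name": env_name,
                    "valueFrom": {
                        "secretKeyRef": {"name": secret_name, "key": key},
                    },
                }],
            }],
        },
    }

pod = secret_env_pod("pod-secrets-demo", "secret-test-demo", "data-1", "SECRET_DATA")
```
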
Nov 27 21:50:47.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:50:47.519: INFO: namespace secrets-7584 deletion completed in 6.214263717s • [SLOW TEST:10.533 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:50:47.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Nov 27 21:50:47.627: INFO: Waiting up to 5m0s for pod "var-expansion-c9c96f97-f09c-4df9-b066-e775752da80f" in namespace "var-expansion-7490" to be "success or failure" Nov 27 21:50:47.649: INFO: Pod "var-expansion-c9c96f97-f09c-4df9-b066-e775752da80f": 
Phase="Pending", Reason="", readiness=false. Elapsed: 21.62517ms Nov 27 21:50:49.678: INFO: Pod "var-expansion-c9c96f97-f09c-4df9-b066-e775752da80f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050695488s Nov 27 21:50:51.685: INFO: Pod "var-expansion-c9c96f97-f09c-4df9-b066-e775752da80f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057103742s STEP: Saw pod success Nov 27 21:50:51.685: INFO: Pod "var-expansion-c9c96f97-f09c-4df9-b066-e775752da80f" satisfied condition "success or failure" Nov 27 21:50:51.689: INFO: Trying to get logs from node iruya-worker pod var-expansion-c9c96f97-f09c-4df9-b066-e775752da80f container dapi-container: STEP: delete the pod Nov 27 21:50:51.723: INFO: Waiting for pod var-expansion-c9c96f97-f09c-4df9-b066-e775752da80f to disappear Nov 27 21:50:51.731: INFO: Pod var-expansion-c9c96f97-f09c-4df9-b066-e775752da80f no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:50:51.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7490" for this suite. 
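[Editor's note] The Variable Expansion test above verifies `$(VAR)` substitution in a container's args. A simplified Python model of that expansion (the real implementation lives in the kubelet; this sketch only captures the three observable rules: known references are replaced, `$$` escapes to a literal `$`, and unknown references are left as written):

```python
import re

def expand(s, mapping):
    # $$ -> literal $ (so $$(VAR) survives as $(VAR)); $(VAR) with a known
    # VAR is replaced; unknown references are left untouched.
    out = []
    i = 0
    while i < len(s):
        c = s[i]
        if c == "$" and i + 1 < len(s):
            if s[i + 1] == "$":
                out.append("$"); i += 2; continue
            m = re.match(r"\(([A-Za-z0-9_.]+)\)", s[i + 1:])
            if m and m.group(1) in mapping:
                out.append(mapping[m.group(1)]); i += 1 + m.end(); continue
        out.append(c); i += 1
    return "".join(out)

print(expand("test-value $(POD_NAME)", {"POD_NAME": "dapi-container"}))
```
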
Nov 27 21:50:57.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:50:57.903: INFO: namespace var-expansion-7490 deletion completed in 6.163660425s • [SLOW TEST:10.381 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:50:57.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:51:03.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4516" for this suite. Nov 27 21:51:09.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:51:09.727: INFO: namespace watch-4516 deletion completed in 6.291833042s • [SLOW TEST:11.822 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:51:09.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Nov 27 21:51:13.877: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:51:13.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4587" for this suite. Nov 27 21:51:19.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:51:20.105: INFO: namespace container-runtime-4587 deletion completed in 6.178288682s • [SLOW TEST:10.376 seconds] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint 
from pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:51:20.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-2715 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2715 to expose endpoints map[] Nov 27 21:51:20.284: INFO: successfully validated that service endpoint-test2 in namespace services-2715 exposes endpoints map[] (5.984652ms elapsed) STEP: Creating pod pod1 in namespace services-2715 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2715 to expose endpoints map[pod1:[80]] Nov 27 21:51:24.385: INFO: successfully validated that service endpoint-test2 in namespace services-2715 exposes endpoints map[pod1:[80]] (4.093469118s elapsed) STEP: Creating pod pod2 in namespace services-2715 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2715 to expose endpoints map[pod1:[80] pod2:[80]] Nov 27 21:51:27.484: INFO: successfully validated that service endpoint-test2 in namespace services-2715 exposes endpoints map[pod1:[80] pod2:[80]] (3.091686108s elapsed) STEP: Deleting 
pod pod1 in namespace services-2715 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2715 to expose endpoints map[pod2:[80]] Nov 27 21:51:27.505: INFO: successfully validated that service endpoint-test2 in namespace services-2715 exposes endpoints map[pod2:[80]] (14.963973ms elapsed) STEP: Deleting pod pod2 in namespace services-2715 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2715 to expose endpoints map[] Nov 27 21:51:27.527: INFO: successfully validated that service endpoint-test2 in namespace services-2715 exposes endpoints map[] (16.743003ms elapsed) [AfterEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:51:27.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2715" for this suite. Nov 27 21:51:49.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:51:50.020: INFO: namespace services-2715 deletion completed in 22.416322549s [AfterEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:29.914 seconds] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:51:50.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Nov 27 21:51:50.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3484' Nov 27 21:51:54.375: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Nov 27 21:51:54.376: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Nov 27 21:51:54.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-3484' Nov 27 21:51:55.684: INFO: stderr: "" Nov 27 21:51:55.684: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:51:55.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3484" for this suite. Nov 27 21:52:17.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:52:17.937: INFO: namespace kubectl-3484 deletion completed in 22.167630338s • [SLOW TEST:27.915 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu 
limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:52:17.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Nov 27 21:52:18.036: INFO: Waiting up to 5m0s for pod "downwardapi-volume-516f683a-cf6e-436c-bfa7-646a055a5457" in namespace "projected-3040" to be "success or failure" Nov 27 21:52:18.052: INFO: Pod "downwardapi-volume-516f683a-cf6e-436c-bfa7-646a055a5457": Phase="Pending", Reason="", readiness=false. Elapsed: 15.8119ms Nov 27 21:52:20.059: INFO: Pod "downwardapi-volume-516f683a-cf6e-436c-bfa7-646a055a5457": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021966193s Nov 27 21:52:22.086: INFO: Pod "downwardapi-volume-516f683a-cf6e-436c-bfa7-646a055a5457": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.049731199s STEP: Saw pod success Nov 27 21:52:22.090: INFO: Pod "downwardapi-volume-516f683a-cf6e-436c-bfa7-646a055a5457" satisfied condition "success or failure" Nov 27 21:52:22.095: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-516f683a-cf6e-436c-bfa7-646a055a5457 container client-container: STEP: delete the pod Nov 27 21:52:22.209: INFO: Waiting for pod downwardapi-volume-516f683a-cf6e-436c-bfa7-646a055a5457 to disappear Nov 27 21:52:22.232: INFO: Pod downwardapi-volume-516f683a-cf6e-436c-bfa7-646a055a5457 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:52:22.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3040" for this suite. Nov 27 21:52:28.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:52:28.434: INFO: namespace projected-3040 deletion completed in 6.192768837s • [SLOW TEST:10.492 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:52:28.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-q68g STEP: Creating a pod to test atomic-volume-subpath Nov 27 21:52:28.540: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-q68g" in namespace "subpath-3775" to be "success or failure" Nov 27 21:52:28.546: INFO: Pod "pod-subpath-test-projected-q68g": Phase="Pending", Reason="", readiness=false. Elapsed: 5.642359ms Nov 27 21:52:30.554: INFO: Pod "pod-subpath-test-projected-q68g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013155989s Nov 27 21:52:32.562: INFO: Pod "pod-subpath-test-projected-q68g": Phase="Running", Reason="", readiness=true. Elapsed: 4.02084319s Nov 27 21:52:34.568: INFO: Pod "pod-subpath-test-projected-q68g": Phase="Running", Reason="", readiness=true. Elapsed: 6.027043669s Nov 27 21:52:36.575: INFO: Pod "pod-subpath-test-projected-q68g": Phase="Running", Reason="", readiness=true. Elapsed: 8.034010219s Nov 27 21:52:38.581: INFO: Pod "pod-subpath-test-projected-q68g": Phase="Running", Reason="", readiness=true. Elapsed: 10.040264776s Nov 27 21:52:40.587: INFO: Pod "pod-subpath-test-projected-q68g": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.046074299s Nov 27 21:52:42.593: INFO: Pod "pod-subpath-test-projected-q68g": Phase="Running", Reason="", readiness=true. Elapsed: 14.05250886s Nov 27 21:52:44.600: INFO: Pod "pod-subpath-test-projected-q68g": Phase="Running", Reason="", readiness=true. Elapsed: 16.059080267s Nov 27 21:52:46.607: INFO: Pod "pod-subpath-test-projected-q68g": Phase="Running", Reason="", readiness=true. Elapsed: 18.066306262s Nov 27 21:52:48.632: INFO: Pod "pod-subpath-test-projected-q68g": Phase="Running", Reason="", readiness=true. Elapsed: 20.091216903s Nov 27 21:52:50.639: INFO: Pod "pod-subpath-test-projected-q68g": Phase="Running", Reason="", readiness=true. Elapsed: 22.097951228s Nov 27 21:52:52.646: INFO: Pod "pod-subpath-test-projected-q68g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.105115857s STEP: Saw pod success Nov 27 21:52:52.646: INFO: Pod "pod-subpath-test-projected-q68g" satisfied condition "success or failure" Nov 27 21:52:52.650: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-q68g container test-container-subpath-projected-q68g: STEP: delete the pod Nov 27 21:52:52.679: INFO: Waiting for pod pod-subpath-test-projected-q68g to disappear Nov 27 21:52:52.823: INFO: Pod pod-subpath-test-projected-q68g no longer exists STEP: Deleting pod pod-subpath-test-projected-q68g Nov 27 21:52:52.823: INFO: Deleting pod "pod-subpath-test-projected-q68g" in namespace "subpath-3775" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:52:52.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3775" for this suite. 
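[Editor's note] The Subpath test completing above mounts a projected volume with `subPath` set, so the container sees only the named sub-directory of the volume. A rough model of that visibility rule (hypothetical helper, not e2e code):

```python
# Model of subPath mounting: the container-visible file tree is the volume's
# file tree re-rooted at the sub-directory named by subPath.

def mount_view(volume_files, sub_path):
    """Return {container_path: data} for a volume mounted with subPath."""
    prefix = sub_path.rstrip("/") + "/"
    return {
        path[len(prefix):]: data
        for path, data in volume_files.items()
        if path.startswith(prefix)
    }

volume = {"path1/file.txt": "projected-data", "path2/file.txt": "other"}
print(mount_view(volume, "path1"))
```
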
Nov 27 21:52:59.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:52:59.158: INFO: namespace subpath-3775 deletion completed in 6.32236183s • [SLOW TEST:30.724 seconds] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:52:59.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: 
Creating the pod Nov 27 21:53:03.810: INFO: Successfully updated pod "labelsupdatecceb547e-f1b3-4924-bcdb-6bb662c67d1a" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:53:05.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7100" for this suite. Nov 27 21:53:27.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:53:28.014: INFO: namespace downward-api-7100 deletion completed in 22.166028859s • [SLOW TEST:28.853 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:53:28.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-2806/configmap-test-38d7b439-e32b-469e-b38e-05d8834d41c0 STEP: Creating a pod to test consume configMaps Nov 27 21:53:28.153: INFO: Waiting up to 5m0s for pod "pod-configmaps-2820e5c7-ff61-4b8a-a7ea-bbd944b64a14" in namespace "configmap-2806" to be "success or failure" Nov 27 21:53:28.163: INFO: Pod "pod-configmaps-2820e5c7-ff61-4b8a-a7ea-bbd944b64a14": Phase="Pending", Reason="", readiness=false. Elapsed: 10.073258ms Nov 27 21:53:30.169: INFO: Pod "pod-configmaps-2820e5c7-ff61-4b8a-a7ea-bbd944b64a14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015460385s Nov 27 21:53:32.176: INFO: Pod "pod-configmaps-2820e5c7-ff61-4b8a-a7ea-bbd944b64a14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022464831s STEP: Saw pod success Nov 27 21:53:32.176: INFO: Pod "pod-configmaps-2820e5c7-ff61-4b8a-a7ea-bbd944b64a14" satisfied condition "success or failure" Nov 27 21:53:32.181: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-2820e5c7-ff61-4b8a-a7ea-bbd944b64a14 container env-test: STEP: delete the pod Nov 27 21:53:32.245: INFO: Waiting for pod pod-configmaps-2820e5c7-ff61-4b8a-a7ea-bbd944b64a14 to disappear Nov 27 21:53:32.255: INFO: Pod pod-configmaps-2820e5c7-ff61-4b8a-a7ea-bbd944b64a14 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:53:32.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2806" for this suite. 
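[Editor's note] Two ConfigMap cases appear above: the env-var consumption test just completed, and the earlier "should fail to create ConfigMap with empty key" test. The latter hinges on API-server key validation. A simplified stand-in for that check (the real rules also reserve `.` and `..`; this is enough to show why the empty-key create is rejected):

```python
import re

# Simplified ConfigMap key validation: non-empty, at most 253 characters,
# consisting of alphanumerics, '-', '_' or '.'.
KEY_RE = re.compile(r"^[-._a-zA-Z0-9]+$")
MAX_KEY_LEN = 253

def validate_configmap_key(key):
    """Return an error string for an invalid key, or None if valid."""
    if not key:
        return "key must not be empty"
    if len(key) > MAX_KEY_LEN:
        return f"key must be no more than {MAX_KEY_LEN} characters"
    if not KEY_RE.match(key):
        return "key must consist of alphanumeric characters, '-', '_' or '.'"
    return None

print(validate_configmap_key(""))       # rejected, as in the empty-key test
print(validate_configmap_key("data-1")) # accepted
```
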
Nov 27 21:53:38.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:53:38.445: INFO: namespace configmap-2806 deletion completed in 6.181501094s • [SLOW TEST:10.427 seconds] [sig-node] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:53:38.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W1127 21:54:18.584667 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Nov 27 21:54:18.585: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:54:18.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3048" for this suite. 
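The garbage collector test deletes the RC with delete options that say "orphan" and then waits 30 seconds to confirm the pods are *not* collected. The essential behavior — with `PropagationPolicy: Orphan`, dependents survive and only the dangling ownerReference is removed — can be modeled with a toy function. The data model below (`map[pod][]ownerUID`) is invented for illustration and is not the real GC graph:

```go
package main

import "fmt"

// orphanOwner models the garbage collector's handling of an owner deleted
// with PropagationPolicy: Orphan: every dependent is kept, and only the
// reference to the deleted owner's UID is stripped from its owner list.
// (Illustrative data model, not the real GC graph types.)
func orphanOwner(deps map[string][]string, ownerUID string) map[string][]string {
	out := make(map[string][]string, len(deps))
	for name, owners := range deps {
		kept := make([]string, 0, len(owners))
		for _, uid := range owners {
			if uid != ownerUID {
				kept = append(kept, uid)
			}
		}
		out[name] = kept // dependent survives, possibly with no owners left
	}
	return out
}

func main() {
	pods := map[string][]string{"pod-a": {"rc-uid"}, "pod-b": {"rc-uid"}}
	fmt.Println(orphanOwner(pods, "rc-uid"))
}
```

With the default (background) propagation instead, the GC would delete the now-ownerless dependents, which is exactly what this test verifies does not happen.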
Nov 27 21:54:26.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:54:26.777: INFO: namespace gc-3048 deletion completed in 8.183686274s • [SLOW TEST:48.331 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:54:26.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-251c5237-4982-45e0-b2df-8a6f9f4a5128 in namespace container-probe-8201 Nov 27 21:54:31.093: INFO: Started pod 
liveness-251c5237-4982-45e0-b2df-8a6f9f4a5128 in namespace container-probe-8201 STEP: checking the pod's current state and verifying that restartCount is present Nov 27 21:54:31.099: INFO: Initial restart count of pod liveness-251c5237-4982-45e0-b2df-8a6f9f4a5128 is 0 Nov 27 21:54:47.158: INFO: Restart count of pod container-probe-8201/liveness-251c5237-4982-45e0-b2df-8a6f9f4a5128 is now 1 (16.059032307s elapsed) Nov 27 21:55:07.227: INFO: Restart count of pod container-probe-8201/liveness-251c5237-4982-45e0-b2df-8a6f9f4a5128 is now 2 (36.128373682s elapsed) Nov 27 21:55:27.298: INFO: Restart count of pod container-probe-8201/liveness-251c5237-4982-45e0-b2df-8a6f9f4a5128 is now 3 (56.199200116s elapsed) Nov 27 21:55:47.383: INFO: Restart count of pod container-probe-8201/liveness-251c5237-4982-45e0-b2df-8a6f9f4a5128 is now 4 (1m16.283906368s elapsed) Nov 27 21:56:55.646: INFO: Restart count of pod container-probe-8201/liveness-251c5237-4982-45e0-b2df-8a6f9f4a5128 is now 5 (2m24.547175601s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:56:55.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8201" for this suite. 
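The probe test's invariant — `restartCount` only ever goes up as the failing liveness probe repeatedly kills the container — is trivial to state in code. (Note also how the gaps between restarts in the log widen from ~16s to over a minute: that is CrashLoopBackOff delaying each restart.) A minimal check over the counts observed above:

```go
package main

import "fmt"

// monotonic reports whether a sequence of observed restartCount values
// never decreases — the invariant the container-probe test asserts while
// the liveness probe repeatedly fails the container.
func monotonic(counts []int) bool {
	for i := 1; i < len(counts); i++ {
		if counts[i] < counts[i-1] {
			return false
		}
	}
	return true
}

func main() {
	// Restart counts observed in the log above, in order.
	observed := []int{0, 1, 2, 3, 4, 5}
	fmt.Println(monotonic(observed)) // prints "true"
}
```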
Nov 27 21:57:01.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:57:01.864: INFO: namespace container-probe-8201 deletion completed in 6.181137931s • [SLOW TEST:155.085 seconds] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:57:01.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Nov 27 21:57:01.951: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6017,SelfLink:/api/v1/namespaces/watch-6017/configmaps/e2e-watch-test-label-changed,UID:28d94d16-5b7b-4fdb-b76c-bdfbc3cfd64c,ResourceVersion:11922343,Generation:0,CreationTimestamp:2020-11-27 21:57:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Nov 27 21:57:01.952: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6017,SelfLink:/api/v1/namespaces/watch-6017/configmaps/e2e-watch-test-label-changed,UID:28d94d16-5b7b-4fdb-b76c-bdfbc3cfd64c,ResourceVersion:11922344,Generation:0,CreationTimestamp:2020-11-27 21:57:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Nov 27 21:57:01.953: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6017,SelfLink:/api/v1/namespaces/watch-6017/configmaps/e2e-watch-test-label-changed,UID:28d94d16-5b7b-4fdb-b76c-bdfbc3cfd64c,ResourceVersion:11922345,Generation:0,CreationTimestamp:2020-11-27 21:57:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Nov 27 21:57:12.029: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6017,SelfLink:/api/v1/namespaces/watch-6017/configmaps/e2e-watch-test-label-changed,UID:28d94d16-5b7b-4fdb-b76c-bdfbc3cfd64c,ResourceVersion:11922366,Generation:0,CreationTimestamp:2020-11-27 21:57:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Nov 27 21:57:12.030: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6017,SelfLink:/api/v1/namespaces/watch-6017/configmaps/e2e-watch-test-label-changed,UID:28d94d16-5b7b-4fdb-b76c-bdfbc3cfd64c,ResourceVersion:11922367,Generation:0,CreationTimestamp:2020-11-27 21:57:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Nov 27 21:57:12.031: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6017,SelfLink:/api/v1/namespaces/watch-6017/configmaps/e2e-watch-test-label-changed,UID:28d94d16-5b7b-4fdb-b76c-bdfbc3cfd64c,ResourceVersion:11922368,Generation:0,CreationTimestamp:2020-11-27 21:57:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:57:12.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6017" for this suite. Nov 27 21:57:18.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:57:18.310: INFO: namespace watch-6017 deletion completed in 6.269851175s • [SLOW TEST:16.444 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] 
[sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:57:18.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-7xck STEP: Creating a pod to test atomic-volume-subpath Nov 27 21:57:18.426: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-7xck" in namespace "subpath-3690" to be "success or failure" Nov 27 21:57:18.435: INFO: Pod "pod-subpath-test-secret-7xck": Phase="Pending", Reason="", readiness=false. Elapsed: 8.941273ms Nov 27 21:57:20.443: INFO: Pod "pod-subpath-test-secret-7xck": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016328133s Nov 27 21:57:22.449: INFO: Pod "pod-subpath-test-secret-7xck": Phase="Running", Reason="", readiness=true. Elapsed: 4.022729953s Nov 27 21:57:24.456: INFO: Pod "pod-subpath-test-secret-7xck": Phase="Running", Reason="", readiness=true. Elapsed: 6.029495331s Nov 27 21:57:26.462: INFO: Pod "pod-subpath-test-secret-7xck": Phase="Running", Reason="", readiness=true. Elapsed: 8.035973847s Nov 27 21:57:28.468: INFO: Pod "pod-subpath-test-secret-7xck": Phase="Running", Reason="", readiness=true. Elapsed: 10.041176116s Nov 27 21:57:30.474: INFO: Pod "pod-subpath-test-secret-7xck": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.047128328s Nov 27 21:57:32.481: INFO: Pod "pod-subpath-test-secret-7xck": Phase="Running", Reason="", readiness=true. Elapsed: 14.054251732s Nov 27 21:57:34.489: INFO: Pod "pod-subpath-test-secret-7xck": Phase="Running", Reason="", readiness=true. Elapsed: 16.062141099s Nov 27 21:57:36.499: INFO: Pod "pod-subpath-test-secret-7xck": Phase="Running", Reason="", readiness=true. Elapsed: 18.072742304s Nov 27 21:57:38.506: INFO: Pod "pod-subpath-test-secret-7xck": Phase="Running", Reason="", readiness=true. Elapsed: 20.079382946s Nov 27 21:57:40.512: INFO: Pod "pod-subpath-test-secret-7xck": Phase="Running", Reason="", readiness=true. Elapsed: 22.085609269s Nov 27 21:57:42.519: INFO: Pod "pod-subpath-test-secret-7xck": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.09229663s STEP: Saw pod success Nov 27 21:57:42.519: INFO: Pod "pod-subpath-test-secret-7xck" satisfied condition "success or failure" Nov 27 21:57:42.523: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-7xck container test-container-subpath-secret-7xck: STEP: delete the pod Nov 27 21:57:42.650: INFO: Waiting for pod pod-subpath-test-secret-7xck to disappear Nov 27 21:57:42.742: INFO: Pod pod-subpath-test-secret-7xck no longer exists STEP: Deleting pod pod-subpath-test-secret-7xck Nov 27 21:57:42.743: INFO: Deleting pod "pod-subpath-test-secret-7xck" in namespace "subpath-3690" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:57:42.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3690" for this suite. 
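The subpath test mounts a single entry of an atomic-writer (secret) volume via a `volumeMount.subPath`. One property such mounts must preserve is containment: a subPath may not escape the volume root. The sketch below captures only that containment idea; the kubelet's real check resolves symlinks on the host and is considerably stricter, so `safeSubPath` is an illustrative name, not a real API:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// safeSubPath reports whether a volumeMount subPath, after lexical
// cleaning, still resolves inside the volume root. A sketch of the
// containment rule only — not the kubelet's actual validation.
func safeSubPath(sub string) bool {
	if path.IsAbs(sub) {
		return false // absolute paths would bypass the volume root entirely
	}
	c := path.Clean(sub)
	// After cleaning, any remaining ".." prefix means the path climbs
	// out of the volume root.
	return c != ".." && !strings.HasPrefix(c, "../")
}

func main() {
	fmt.Println(safeSubPath("private/data"))     // inside the volume
	fmt.Println(safeSubPath("../../etc/shadow")) // escapes the volume
}
```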
Nov 27 21:57:48.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:57:48.973: INFO: namespace subpath-3690 deletion completed in 6.185266988s • [SLOW TEST:30.662 seconds] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:57:48.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test 
downward API volume plugin Nov 27 21:57:49.142: INFO: Waiting up to 5m0s for pod "downwardapi-volume-39e25acd-212b-4d5b-8efe-f68fd60a829e" in namespace "downward-api-6842" to be "success or failure" Nov 27 21:57:49.170: INFO: Pod "downwardapi-volume-39e25acd-212b-4d5b-8efe-f68fd60a829e": Phase="Pending", Reason="", readiness=false. Elapsed: 27.709302ms Nov 27 21:57:51.176: INFO: Pod "downwardapi-volume-39e25acd-212b-4d5b-8efe-f68fd60a829e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034359049s Nov 27 21:57:53.184: INFO: Pod "downwardapi-volume-39e25acd-212b-4d5b-8efe-f68fd60a829e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041972435s STEP: Saw pod success Nov 27 21:57:53.184: INFO: Pod "downwardapi-volume-39e25acd-212b-4d5b-8efe-f68fd60a829e" satisfied condition "success or failure" Nov 27 21:57:53.190: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-39e25acd-212b-4d5b-8efe-f68fd60a829e container client-container: STEP: delete the pod Nov 27 21:57:53.214: INFO: Waiting for pod downwardapi-volume-39e25acd-212b-4d5b-8efe-f68fd60a829e to disappear Nov 27 21:57:53.218: INFO: Pod downwardapi-volume-39e25acd-212b-4d5b-8efe-f68fd60a829e no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:57:53.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6842" for this suite. 
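This Downward API test projects the pod's own name into a volume file via a `fieldRef` with `fieldPath: metadata.name`, then reads it back from the container. The resolution step — fieldPath string to metadata value — can be sketched as below. `renderFieldRef` is an invented helper handling only the `metadata.*` cases this sketch needs; the real logic lives in the kubelet's downward API volume plugin:

```go
package main

import (
	"fmt"
	"strings"
)

// renderFieldRef resolves a downward-API fieldPath of the form
// "metadata.<key>" against a flat metadata map, mimicking how a
// downwardAPI volume file gets its contents. Illustrative only.
func renderFieldRef(meta map[string]string, fieldPath string) (string, bool) {
	const prefix = "metadata."
	if !strings.HasPrefix(fieldPath, prefix) {
		return "", false // this sketch supports only metadata.* fields
	}
	v, ok := meta[strings.TrimPrefix(fieldPath, prefix)]
	return v, ok
}

func main() {
	meta := map[string]string{
		"name":      "downwardapi-volume-pod",
		"namespace": "downward-api-6842",
	}
	name, _ := renderFieldRef(meta, "metadata.name")
	fmt.Println(name)
}
```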
Nov 27 21:57:59.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:57:59.436: INFO: namespace downward-api-6842 deletion completed in 6.209750514s • [SLOW TEST:10.462 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:57:59.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Nov 27 21:57:59.526: INFO: Pod name rollover-pod: Found 0 pods out of 1 Nov 27 21:58:04.534: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Nov 27 21:58:04.534: INFO: Waiting for pods owned 
by replica set "test-rollover-controller" to become ready Nov 27 21:58:06.542: INFO: Creating deployment "test-rollover-deployment" Nov 27 21:58:06.556: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Nov 27 21:58:08.569: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Nov 27 21:58:08.582: INFO: Ensure that both replica sets have 1 created replica Nov 27 21:58:08.592: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Nov 27 21:58:08.618: INFO: Updating deployment test-rollover-deployment Nov 27 21:58:08.619: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Nov 27 21:58:10.655: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Nov 27 21:58:10.666: INFO: Make sure deployment "test-rollover-deployment" is complete Nov 27 21:58:10.678: INFO: all replica sets need to contain the pod-template-hash label Nov 27 21:58:10.679: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63742111086, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63742111086, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63742111088, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63742111086, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 27 21:58:12.695: INFO: all replica sets 
need to contain the pod-template-hash label Nov 27 21:58:12.696: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63742111086, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63742111086, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63742111092, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63742111086, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 27 21:58:14.696: INFO: all replica sets need to contain the pod-template-hash label Nov 27 21:58:14.697: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63742111086, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63742111086, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63742111092, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63742111086, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 27 21:58:16.696: INFO: all replica sets need to contain the pod-template-hash label Nov 27 21:58:16.696: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63742111086, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63742111086, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63742111092, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63742111086, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 27 21:58:18.695: INFO: all replica sets need to contain the pod-template-hash label Nov 27 21:58:18.695: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63742111086, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63742111086, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63742111092, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63742111086, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 27 21:58:20.696: INFO: all replica sets need to contain the pod-template-hash label Nov 27 21:58:20.696: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63742111086, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63742111086, loc:(*time.Location)(0x792fa60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63742111092, loc:(*time.Location)(0x792fa60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63742111086, loc:(*time.Location)(0x792fa60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 27 21:58:22.697: INFO: Nov 27 21:58:22.697: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Nov 27 21:58:22.712: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-5365,SelfLink:/apis/apps/v1/namespaces/deployment-5365/deployments/test-rollover-deployment,UID:7ebde745-412e-4f63-8905-542f60a0a3cc,ResourceVersion:11922641,Generation:2,CreationTimestamp:2020-11-27 21:58:06 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-11-27 21:58:06 +0000 UTC 2020-11-27 21:58:06 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-11-27 21:58:22 +0000 UTC 2020-11-27 21:58:06 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Nov 27 21:58:22.719: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-5365,SelfLink:/apis/apps/v1/namespaces/deployment-5365/replicasets/test-rollover-deployment-854595fc44,UID:089a2526-b486-48d4-b94d-ba1a9a85b30e,ResourceVersion:11922629,Generation:2,CreationTimestamp:2020-11-27 21:58:08 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 7ebde745-412e-4f63-8905-542f60a0a3cc 0x4002332687 0x4002332688}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Nov 27 21:58:22.719: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Nov 27 21:58:22.720: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-5365,SelfLink:/apis/apps/v1/namespaces/deployment-5365/replicasets/test-rollover-controller,UID:091d85a6-e46a-47e5-936f-a89eafa97023,ResourceVersion:11922638,Generation:2,CreationTimestamp:2020-11-27 21:57:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 7ebde745-412e-4f63-8905-542f60a0a3cc 0x400233259f 0x40023325b0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Nov 27 21:58:22.721: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-5365,SelfLink:/apis/apps/v1/namespaces/deployment-5365/replicasets/test-rollover-deployment-9b8b997cf,UID:040377b8-c08d-48f9-a584-67cbe9998418,ResourceVersion:11922584,Generation:2,CreationTimestamp:2020-11-27 21:58:06 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 7ebde745-412e-4f63-8905-542f60a0a3cc 0x4002332770 0x4002332771}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Nov 27 21:58:22.728: INFO: Pod "test-rollover-deployment-854595fc44-8tgx6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-8tgx6,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-5365,SelfLink:/api/v1/namespaces/deployment-5365/pods/test-rollover-deployment-854595fc44-8tgx6,UID:1b9aa5e9-81e5-4ab9-9f40-2b3fee8e4415,ResourceVersion:11922607,Generation:0,CreationTimestamp:2020-11-27 21:58:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 089a2526-b486-48d4-b94d-ba1a9a85b30e 0x4002333e57 0x4002333e58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8p956 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8p956,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-8p956 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4002333ee0} {node.kubernetes.io/unreachable Exists NoExecute 0x4002333f00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:58:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:58:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:58:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 21:58:08 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.237,StartTime:2020-11-27 21:58:08 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-11-27 21:58:11 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://4ba45f88b1b3cee454016682fcc5ec5d2372c51de49c49d7449c667aa9b5dec4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:58:22.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5365" for this suite. Nov 27 21:58:30.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:58:31.064: INFO: namespace deployment-5365 deletion completed in 8.328787834s • [SLOW TEST:31.625 seconds] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:58:31.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Nov 27 21:58:31.170: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab7f66ac-b5c0-47c2-8f8e-5aa1035f0c8f" in namespace "projected-6215" to be "success or failure" Nov 27 21:58:31.211: INFO: Pod "downwardapi-volume-ab7f66ac-b5c0-47c2-8f8e-5aa1035f0c8f": Phase="Pending", Reason="", readiness=false. Elapsed: 41.365789ms Nov 27 21:58:33.219: INFO: Pod "downwardapi-volume-ab7f66ac-b5c0-47c2-8f8e-5aa1035f0c8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04888936s Nov 27 21:58:35.226: INFO: Pod "downwardapi-volume-ab7f66ac-b5c0-47c2-8f8e-5aa1035f0c8f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.056229418s STEP: Saw pod success Nov 27 21:58:35.226: INFO: Pod "downwardapi-volume-ab7f66ac-b5c0-47c2-8f8e-5aa1035f0c8f" satisfied condition "success or failure" Nov 27 21:58:35.231: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-ab7f66ac-b5c0-47c2-8f8e-5aa1035f0c8f container client-container: STEP: delete the pod Nov 27 21:58:35.265: INFO: Waiting for pod downwardapi-volume-ab7f66ac-b5c0-47c2-8f8e-5aa1035f0c8f to disappear Nov 27 21:58:35.270: INFO: Pod downwardapi-volume-ab7f66ac-b5c0-47c2-8f8e-5aa1035f0c8f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:58:35.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6215" for this suite. Nov 27 21:58:41.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:58:41.459: INFO: namespace projected-6215 deletion completed in 6.180815971s • [SLOW TEST:10.391 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:58:41.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Nov 27 21:58:41.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2103' Nov 27 21:58:43.315: INFO: stderr: "" Nov 27 21:58:43.316: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. Nov 27 21:58:44.325: INFO: Selector matched 1 pods for map[app:redis] Nov 27 21:58:44.325: INFO: Found 0 / 1 Nov 27 21:58:45.325: INFO: Selector matched 1 pods for map[app:redis] Nov 27 21:58:45.325: INFO: Found 0 / 1 Nov 27 21:58:46.324: INFO: Selector matched 1 pods for map[app:redis] Nov 27 21:58:46.324: INFO: Found 0 / 1 Nov 27 21:58:47.325: INFO: Selector matched 1 pods for map[app:redis] Nov 27 21:58:47.325: INFO: Found 1 / 1 Nov 27 21:58:47.325: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Nov 27 21:58:47.333: INFO: Selector matched 1 pods for map[app:redis] Nov 27 21:58:47.333: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
STEP: checking for a matching strings Nov 27 21:58:47.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wggdh redis-master --namespace=kubectl-2103' Nov 27 21:58:48.640: INFO: stderr: "" Nov 27 21:58:48.640: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 27 Nov 21:58:45.921 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 27 Nov 21:58:45.921 # Server started, Redis version 3.2.12\n1:M 27 Nov 21:58:45.921 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 27 Nov 21:58:45.921 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Nov 27 21:58:48.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wggdh redis-master --namespace=kubectl-2103 --tail=1' Nov 27 21:58:49.951: INFO: stderr: "" Nov 27 21:58:49.951: INFO: stdout: "1:M 27 Nov 21:58:45.921 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Nov 27 21:58:49.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wggdh redis-master --namespace=kubectl-2103 --limit-bytes=1' Nov 27 21:58:51.228: INFO: stderr: "" Nov 27 21:58:51.228: INFO: stdout: " " STEP: exposing timestamps Nov 27 21:58:51.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wggdh redis-master --namespace=kubectl-2103 --tail=1 --timestamps' Nov 27 21:58:52.552: INFO: stderr: "" Nov 27 21:58:52.553: INFO: stdout: "2020-11-27T21:58:45.921365098Z 1:M 27 Nov 21:58:45.921 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Nov 27 21:58:55.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wggdh redis-master --namespace=kubectl-2103 --since=1s' Nov 27 21:58:56.345: INFO: stderr: "" Nov 27 21:58:56.346: INFO: stdout: "" Nov 27 21:58:56.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wggdh redis-master --namespace=kubectl-2103 --since=24h' Nov 27 21:58:57.654: INFO: stderr: "" Nov 27 21:58:57.654: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 27 Nov 21:58:45.921 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 27 Nov 21:58:45.921 # Server started, Redis version 3.2.12\n1:M 27 Nov 21:58:45.921 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 27 Nov 21:58:45.921 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Nov 27 21:58:57.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2103' Nov 27 21:58:58.921: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Nov 27 21:58:58.921: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Nov 27 21:58:58.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-2103' Nov 27 21:59:00.233: INFO: stderr: "No resources found.\n" Nov 27 21:59:00.233: INFO: stdout: "" Nov 27 21:59:00.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-2103 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Nov 27 21:59:01.499: INFO: stderr: "" Nov 27 21:59:01.499: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:59:01.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2103" for this suite. 
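The kubectl test above exercises the log-filtering flags `--tail`, `--limit-bytes`, `--timestamps`, and `--since`. The line- and byte-limiting semantics can be reproduced locally with plain coreutils on a sample file; this is an illustrative sketch (the file path and contents are invented, not part of the test suite):

```shell
# Write a three-line sample log resembling the Redis startup output above.
printf 'line one\nline two\nready to accept connections\n' > /tmp/sample.log

# `kubectl logs --tail=1` keeps only the last line, like tail -n 1:
tail -n 1 /tmp/sample.log        # prints: ready to accept connections

# `kubectl logs --limit-bytes=1` truncates to the first byte, like head -c 1:
head -c 1 /tmp/sample.log        # prints: l
```

This mirrors why the test saw a single line from `--tail=1` and a lone space character from `--limit-bytes=1`.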
Nov 27 21:59:07.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:59:07.712: INFO: namespace kubectl-2103 deletion completed in 6.204157596s • [SLOW TEST:26.252 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:59:07.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W1127 21:59:17.852823 7 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 27 21:59:17.853: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:59:17.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5515" for this suite. 
Nov 27 21:59:23.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:59:24.047: INFO: namespace gc-5515 deletion completed in 6.184844332s • [SLOW TEST:16.334 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:59:24.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-0a03d0e6-e41d-4a2b-b705-acd8d752ed5e STEP: Creating a pod to test consume configMaps Nov 27 21:59:24.191: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8fe2fe6d-91c5-4b60-a606-e87bc4f904f4" in namespace "projected-4407" to be "success or 
failure" Nov 27 21:59:24.199: INFO: Pod "pod-projected-configmaps-8fe2fe6d-91c5-4b60-a606-e87bc4f904f4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.975581ms Nov 27 21:59:26.260: INFO: Pod "pod-projected-configmaps-8fe2fe6d-91c5-4b60-a606-e87bc4f904f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068844232s Nov 27 21:59:28.266: INFO: Pod "pod-projected-configmaps-8fe2fe6d-91c5-4b60-a606-e87bc4f904f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075152228s STEP: Saw pod success Nov 27 21:59:28.266: INFO: Pod "pod-projected-configmaps-8fe2fe6d-91c5-4b60-a606-e87bc4f904f4" satisfied condition "success or failure" Nov 27 21:59:28.270: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-8fe2fe6d-91c5-4b60-a606-e87bc4f904f4 container projected-configmap-volume-test: STEP: delete the pod Nov 27 21:59:28.333: INFO: Waiting for pod pod-projected-configmaps-8fe2fe6d-91c5-4b60-a606-e87bc4f904f4 to disappear Nov 27 21:59:28.354: INFO: Pod pod-projected-configmaps-8fe2fe6d-91c5-4b60-a606-e87bc4f904f4 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:59:28.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4407" for this suite. 
Nov 27 21:59:34.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:59:34.530: INFO: namespace projected-4407 deletion completed in 6.168510629s • [SLOW TEST:10.483 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:59:34.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Nov 27 21:59:34.616: INFO: Waiting up to 5m0s for pod "pod-057409b9-a1df-432d-95ff-b57d6a2ba8dd" in namespace "emptydir-6716" to be "success or failure" Nov 27 21:59:34.624: INFO: Pod "pod-057409b9-a1df-432d-95ff-b57d6a2ba8dd": 
Phase="Pending", Reason="", readiness=false. Elapsed: 8.064181ms Nov 27 21:59:36.630: INFO: Pod "pod-057409b9-a1df-432d-95ff-b57d6a2ba8dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014638375s Nov 27 21:59:38.638: INFO: Pod "pod-057409b9-a1df-432d-95ff-b57d6a2ba8dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021738459s STEP: Saw pod success Nov 27 21:59:38.638: INFO: Pod "pod-057409b9-a1df-432d-95ff-b57d6a2ba8dd" satisfied condition "success or failure" Nov 27 21:59:38.648: INFO: Trying to get logs from node iruya-worker2 pod pod-057409b9-a1df-432d-95ff-b57d6a2ba8dd container test-container: STEP: delete the pod Nov 27 21:59:38.678: INFO: Waiting for pod pod-057409b9-a1df-432d-95ff-b57d6a2ba8dd to disappear Nov 27 21:59:38.798: INFO: Pod pod-057409b9-a1df-432d-95ff-b57d6a2ba8dd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:59:38.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6716" for this suite. 
Nov 27 21:59:45.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 21:59:45.309: INFO: namespace emptydir-6716 deletion completed in 6.246451004s • [SLOW TEST:10.776 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 21:59:45.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 21:59:45.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1913" for this suite. Nov 27 22:00:07.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 22:00:07.677: INFO: namespace kubelet-test-1913 deletion completed in 22.18977151s • [SLOW TEST:22.367 seconds] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 22:00:07.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for 
a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-eb6d8641-006b-4df8-a2e7-9dc66a0752e7 STEP: Creating a pod to test consume secrets Nov 27 22:00:07.801: INFO: Waiting up to 5m0s for pod "pod-secrets-c5010503-024d-49ea-abd8-0459fdb3dfad" in namespace "secrets-3091" to be "success or failure" Nov 27 22:00:07.811: INFO: Pod "pod-secrets-c5010503-024d-49ea-abd8-0459fdb3dfad": Phase="Pending", Reason="", readiness=false. Elapsed: 9.630313ms Nov 27 22:00:09.818: INFO: Pod "pod-secrets-c5010503-024d-49ea-abd8-0459fdb3dfad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01624662s Nov 27 22:00:11.825: INFO: Pod "pod-secrets-c5010503-024d-49ea-abd8-0459fdb3dfad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023312118s STEP: Saw pod success Nov 27 22:00:11.825: INFO: Pod "pod-secrets-c5010503-024d-49ea-abd8-0459fdb3dfad" satisfied condition "success or failure" Nov 27 22:00:11.835: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-c5010503-024d-49ea-abd8-0459fdb3dfad container secret-volume-test: STEP: delete the pod Nov 27 22:00:11.860: INFO: Waiting for pod pod-secrets-c5010503-024d-49ea-abd8-0459fdb3dfad to disappear Nov 27 22:00:11.887: INFO: Pod pod-secrets-c5010503-024d-49ea-abd8-0459fdb3dfad no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 22:00:11.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3091" for this suite. 
Nov 27 22:00:17.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 22:00:18.110: INFO: namespace secrets-3091 deletion completed in 6.216296151s • [SLOW TEST:10.430 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 22:00:18.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Nov 27 22:00:18.193: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Nov 27 22:00:23.201: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Nov 27 22:00:23.201: INFO: Creating deployment 
test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Nov 27 22:00:27.274: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-7416,SelfLink:/apis/apps/v1/namespaces/deployment-7416/deployments/test-cleanup-deployment,UID:dd32d6ce-b02f-48df-b45c-5170de6c05a4,ResourceVersion:11923166,Generation:1,CreationTimestamp:2020-11-27 22:00:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-11-27 22:00:23 +0000 UTC 2020-11-27 22:00:23 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-11-27 22:00:26 +0000 UTC 2020-11-27 22:00:23 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Nov 27 22:00:27.282: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-7416,SelfLink:/apis/apps/v1/namespaces/deployment-7416/replicasets/test-cleanup-deployment-55bbcbc84c,UID:55cefc1f-2c91-4eea-ade1-46b70d7d7051,ResourceVersion:11923155,Generation:1,CreationTimestamp:2020-11-27 22:00:23 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment dd32d6ce-b02f-48df-b45c-5170de6c05a4 0x40031407d7 0x40031407d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Nov 27 22:00:27.289: INFO: Pod "test-cleanup-deployment-55bbcbc84c-qcflg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-qcflg,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-7416,SelfLink:/api/v1/namespaces/deployment-7416/pods/test-cleanup-deployment-55bbcbc84c-qcflg,UID:7486638d-2743-41fb-8dc6-5307b4c21d33,ResourceVersion:11923154,Generation:0,CreationTimestamp:2020-11-27 22:00:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 55cefc1f-2c91-4eea-ade1-46b70d7d7051 0x4003140da7 0x4003140da8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-trkmj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-trkmj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-trkmj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x4003140e20} {node.kubernetes.io/unreachable Exists NoExecute 0x4003140e40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 22:00:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 22:00:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 22:00:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 22:00:23 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.243,StartTime:2020-11-27 22:00:23 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-11-27 22:00:25 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://ba3422735b4b8482c381c75ed3f149570c0f15decd1d8899418faa0f685b75fc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 22:00:27.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7416" for this suite. Nov 27 22:00:33.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 22:00:33.510: INFO: namespace deployment-7416 deletion completed in 6.212749788s • [SLOW TEST:15.398 seconds] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 22:00:33.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable 
from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-c0dfc2fb-ea0c-4453-9d0e-830fd90a7b1b STEP: Creating a pod to test consume configMaps Nov 27 22:00:33.633: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6ee01204-9c8a-47be-9974-cfc063489593" in namespace "projected-7415" to be "success or failure" Nov 27 22:00:33.637: INFO: Pod "pod-projected-configmaps-6ee01204-9c8a-47be-9974-cfc063489593": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037672ms Nov 27 22:00:35.644: INFO: Pod "pod-projected-configmaps-6ee01204-9c8a-47be-9974-cfc063489593": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011125658s Nov 27 22:00:37.659: INFO: Pod "pod-projected-configmaps-6ee01204-9c8a-47be-9974-cfc063489593": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025666825s STEP: Saw pod success Nov 27 22:00:37.659: INFO: Pod "pod-projected-configmaps-6ee01204-9c8a-47be-9974-cfc063489593" satisfied condition "success or failure" Nov 27 22:00:37.664: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-6ee01204-9c8a-47be-9974-cfc063489593 container projected-configmap-volume-test: STEP: delete the pod Nov 27 22:00:37.730: INFO: Waiting for pod pod-projected-configmaps-6ee01204-9c8a-47be-9974-cfc063489593 to disappear Nov 27 22:00:37.745: INFO: Pod pod-projected-configmaps-6ee01204-9c8a-47be-9974-cfc063489593 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 22:00:37.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7415" for this suite. 
Nov 27 22:00:43.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 22:00:43.982: INFO: namespace projected-7415 deletion completed in 6.229240225s • [SLOW TEST:10.470 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 22:00:43.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-379bb7d1-54b7-4192-805f-2537542b9a1a STEP: Creating a pod to test consume configMaps Nov 27 22:00:44.076: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-da9d9497-e5b5-4022-938d-b73b39f636bb" in namespace "projected-7240" to be "success or failure" 
Nov 27 22:00:44.096: INFO: Pod "pod-projected-configmaps-da9d9497-e5b5-4022-938d-b73b39f636bb": Phase="Pending", Reason="", readiness=false. Elapsed: 19.781842ms Nov 27 22:00:46.189: INFO: Pod "pod-projected-configmaps-da9d9497-e5b5-4022-938d-b73b39f636bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112153195s Nov 27 22:00:48.196: INFO: Pod "pod-projected-configmaps-da9d9497-e5b5-4022-938d-b73b39f636bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.119099095s STEP: Saw pod success Nov 27 22:00:48.196: INFO: Pod "pod-projected-configmaps-da9d9497-e5b5-4022-938d-b73b39f636bb" satisfied condition "success or failure" Nov 27 22:00:48.201: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-da9d9497-e5b5-4022-938d-b73b39f636bb container projected-configmap-volume-test: STEP: delete the pod Nov 27 22:00:48.262: INFO: Waiting for pod pod-projected-configmaps-da9d9497-e5b5-4022-938d-b73b39f636bb to disappear Nov 27 22:00:48.277: INFO: Pod pod-projected-configmaps-da9d9497-e5b5-4022-938d-b73b39f636bb no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 22:00:48.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7240" for this suite. 
Nov 27 22:00:54.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 22:00:54.458: INFO: namespace projected-7240 deletion completed in 6.170886354s • [SLOW TEST:10.475 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 22:00:54.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-1526 [It] should perform rolling updates and roll 
backs of template modifications [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Nov 27 22:00:54.609: INFO: Found 0 stateful pods, waiting for 3 Nov 27 22:01:04.618: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Nov 27 22:01:04.619: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Nov 27 22:01:04.619: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Nov 27 22:01:04.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1526 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Nov 27 22:01:06.160: INFO: stderr: "I1127 22:01:05.974005 2341 log.go:172] (0x4000896630) (0x40008d6a00) Create stream\nI1127 22:01:05.979976 2341 log.go:172] (0x4000896630) (0x40008d6a00) Stream added, broadcasting: 1\nI1127 22:01:05.995985 2341 log.go:172] (0x4000896630) Reply frame received for 1\nI1127 22:01:05.997086 2341 log.go:172] (0x4000896630) (0x40008d6000) Create stream\nI1127 22:01:05.997231 2341 log.go:172] (0x4000896630) (0x40008d6000) Stream added, broadcasting: 3\nI1127 22:01:05.999225 2341 log.go:172] (0x4000896630) Reply frame received for 3\nI1127 22:01:05.999516 2341 log.go:172] (0x4000896630) (0x400065c320) Create stream\nI1127 22:01:05.999586 2341 log.go:172] (0x4000896630) (0x400065c320) Stream added, broadcasting: 5\nI1127 22:01:06.000773 2341 log.go:172] (0x4000896630) Reply frame received for 5\nI1127 22:01:06.088945 2341 log.go:172] (0x4000896630) Data frame received for 5\nI1127 22:01:06.089338 2341 log.go:172] (0x400065c320) (5) Data frame handling\nI1127 22:01:06.090011 2341 log.go:172] (0x400065c320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1127 22:01:06.137477 2341 log.go:172] 
(0x4000896630) Data frame received for 3\nI1127 22:01:06.137662 2341 log.go:172] (0x40008d6000) (3) Data frame handling\nI1127 22:01:06.137778 2341 log.go:172] (0x40008d6000) (3) Data frame sent\nI1127 22:01:06.137889 2341 log.go:172] (0x4000896630) Data frame received for 3\nI1127 22:01:06.137981 2341 log.go:172] (0x40008d6000) (3) Data frame handling\nI1127 22:01:06.138277 2341 log.go:172] (0x4000896630) Data frame received for 5\nI1127 22:01:06.138448 2341 log.go:172] (0x400065c320) (5) Data frame handling\nI1127 22:01:06.139791 2341 log.go:172] (0x4000896630) Data frame received for 1\nI1127 22:01:06.139908 2341 log.go:172] (0x40008d6a00) (1) Data frame handling\nI1127 22:01:06.140025 2341 log.go:172] (0x40008d6a00) (1) Data frame sent\nI1127 22:01:06.140689 2341 log.go:172] (0x4000896630) (0x40008d6a00) Stream removed, broadcasting: 1\nI1127 22:01:06.143931 2341 log.go:172] (0x4000896630) Go away received\nI1127 22:01:06.148426 2341 log.go:172] (0x4000896630) (0x40008d6a00) Stream removed, broadcasting: 1\nI1127 22:01:06.149210 2341 log.go:172] (0x4000896630) (0x40008d6000) Stream removed, broadcasting: 3\nI1127 22:01:06.149500 2341 log.go:172] (0x4000896630) (0x400065c320) Stream removed, broadcasting: 5\n" Nov 27 22:01:06.161: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Nov 27 22:01:06.161: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Nov 27 22:01:16.227: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Nov 27 22:01:26.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1526 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Nov 27 22:01:27.759: INFO: stderr: "I1127 
22:01:27.621322 2365 log.go:172] (0x40006a6580) (0x400064cc80) Create stream\nI1127 22:01:27.627158 2365 log.go:172] (0x40006a6580) (0x400064cc80) Stream added, broadcasting: 1\nI1127 22:01:27.646640 2365 log.go:172] (0x40006a6580) Reply frame received for 1\nI1127 22:01:27.647748 2365 log.go:172] (0x40006a6580) (0x40009180a0) Create stream\nI1127 22:01:27.647866 2365 log.go:172] (0x40006a6580) (0x40009180a0) Stream added, broadcasting: 3\nI1127 22:01:27.649649 2365 log.go:172] (0x40006a6580) Reply frame received for 3\nI1127 22:01:27.649885 2365 log.go:172] (0x40006a6580) (0x4000858000) Create stream\nI1127 22:01:27.649946 2365 log.go:172] (0x40006a6580) (0x4000858000) Stream added, broadcasting: 5\nI1127 22:01:27.651132 2365 log.go:172] (0x40006a6580) Reply frame received for 5\nI1127 22:01:27.738874 2365 log.go:172] (0x40006a6580) Data frame received for 3\nI1127 22:01:27.739151 2365 log.go:172] (0x40006a6580) Data frame received for 1\nI1127 22:01:27.739410 2365 log.go:172] (0x400064cc80) (1) Data frame handling\nI1127 22:01:27.739692 2365 log.go:172] (0x40006a6580) Data frame received for 5\nI1127 22:01:27.739839 2365 log.go:172] (0x4000858000) (5) Data frame handling\nI1127 22:01:27.740357 2365 log.go:172] (0x40009180a0) (3) Data frame handling\nI1127 22:01:27.740669 2365 log.go:172] (0x4000858000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI1127 22:01:27.741332 2365 log.go:172] (0x400064cc80) (1) Data frame sent\nI1127 22:01:27.741634 2365 log.go:172] (0x40006a6580) Data frame received for 5\nI1127 22:01:27.741723 2365 log.go:172] (0x40009180a0) (3) Data frame sent\nI1127 22:01:27.741805 2365 log.go:172] (0x40006a6580) Data frame received for 3\nI1127 22:01:27.741864 2365 log.go:172] (0x40009180a0) (3) Data frame handling\nI1127 22:01:27.741950 2365 log.go:172] (0x4000858000) (5) Data frame handling\nI1127 22:01:27.743918 2365 log.go:172] (0x40006a6580) (0x400064cc80) Stream removed, broadcasting: 1\nI1127 22:01:27.745687 2365 
log.go:172] (0x40006a6580) Go away received\nI1127 22:01:27.749537 2365 log.go:172] (0x40006a6580) (0x400064cc80) Stream removed, broadcasting: 1\nI1127 22:01:27.749727 2365 log.go:172] (0x40006a6580) (0x40009180a0) Stream removed, broadcasting: 3\nI1127 22:01:27.749890 2365 log.go:172] (0x40006a6580) (0x4000858000) Stream removed, broadcasting: 5\n" Nov 27 22:01:27.760: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Nov 27 22:01:27.761: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Nov 27 22:01:37.795: INFO: Waiting for StatefulSet statefulset-1526/ss2 to complete update Nov 27 22:01:37.796: INFO: Waiting for Pod statefulset-1526/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Nov 27 22:01:37.796: INFO: Waiting for Pod statefulset-1526/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Nov 27 22:01:47.810: INFO: Waiting for StatefulSet statefulset-1526/ss2 to complete update Nov 27 22:01:47.811: INFO: Waiting for Pod statefulset-1526/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Nov 27 22:01:57.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1526 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Nov 27 22:02:02.150: INFO: stderr: "I1127 22:02:01.903733 2389 log.go:172] (0x4000ab84d0) (0x40006668c0) Create stream\nI1127 22:02:01.907248 2389 log.go:172] (0x4000ab84d0) (0x40006668c0) Stream added, broadcasting: 1\nI1127 22:02:01.921861 2389 log.go:172] (0x4000ab84d0) Reply frame received for 1\nI1127 22:02:01.923155 2389 log.go:172] (0x4000ab84d0) (0x400097c000) Create stream\nI1127 22:02:01.923298 2389 log.go:172] (0x4000ab84d0) (0x400097c000) Stream added, broadcasting: 3\nI1127 22:02:01.925753 2389 log.go:172] (0x4000ab84d0) Reply frame received for 3\nI1127 
22:02:01.926278 2389 log.go:172] (0x4000ab84d0) (0x4000666960) Create stream\nI1127 22:02:01.926376 2389 log.go:172] (0x4000ab84d0) (0x4000666960) Stream added, broadcasting: 5\nI1127 22:02:01.928305 2389 log.go:172] (0x4000ab84d0) Reply frame received for 5\nI1127 22:02:02.050799 2389 log.go:172] (0x4000ab84d0) Data frame received for 5\nI1127 22:02:02.051086 2389 log.go:172] (0x4000666960) (5) Data frame handling\nI1127 22:02:02.051715 2389 log.go:172] (0x4000666960) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1127 22:02:02.127724 2389 log.go:172] (0x4000ab84d0) Data frame received for 5\nI1127 22:02:02.127886 2389 log.go:172] (0x4000666960) (5) Data frame handling\nI1127 22:02:02.128023 2389 log.go:172] (0x4000ab84d0) Data frame received for 3\nI1127 22:02:02.128175 2389 log.go:172] (0x400097c000) (3) Data frame handling\nI1127 22:02:02.128306 2389 log.go:172] (0x400097c000) (3) Data frame sent\nI1127 22:02:02.128397 2389 log.go:172] (0x4000ab84d0) Data frame received for 3\nI1127 22:02:02.128487 2389 log.go:172] (0x400097c000) (3) Data frame handling\nI1127 22:02:02.129646 2389 log.go:172] (0x4000ab84d0) Data frame received for 1\nI1127 22:02:02.129746 2389 log.go:172] (0x40006668c0) (1) Data frame handling\nI1127 22:02:02.129839 2389 log.go:172] (0x40006668c0) (1) Data frame sent\nI1127 22:02:02.131859 2389 log.go:172] (0x4000ab84d0) (0x40006668c0) Stream removed, broadcasting: 1\nI1127 22:02:02.133394 2389 log.go:172] (0x4000ab84d0) Go away received\nI1127 22:02:02.136711 2389 log.go:172] (0x4000ab84d0) (0x40006668c0) Stream removed, broadcasting: 1\nI1127 22:02:02.137400 2389 log.go:172] (0x4000ab84d0) (0x400097c000) Stream removed, broadcasting: 3\nI1127 22:02:02.137702 2389 log.go:172] (0x4000ab84d0) (0x4000666960) Stream removed, broadcasting: 5\n" Nov 27 22:02:02.150: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Nov 27 22:02:02.150: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true 
on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Nov 27 22:02:12.221: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Nov 27 22:02:22.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1526 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Nov 27 22:02:23.782: INFO: stderr: "I1127 22:02:23.671366 2422 log.go:172] (0x4000888630) (0x4000842a00) Create stream\nI1127 22:02:23.675994 2422 log.go:172] (0x4000888630) (0x4000842a00) Stream added, broadcasting: 1\nI1127 22:02:23.692966 2422 log.go:172] (0x4000888630) Reply frame received for 1\nI1127 22:02:23.693606 2422 log.go:172] (0x4000888630) (0x4000842000) Create stream\nI1127 22:02:23.693670 2422 log.go:172] (0x4000888630) (0x4000842000) Stream added, broadcasting: 3\nI1127 22:02:23.701383 2422 log.go:172] (0x4000888630) Reply frame received for 3\nI1127 22:02:23.701800 2422 log.go:172] (0x4000888630) (0x400059a320) Create stream\nI1127 22:02:23.701888 2422 log.go:172] (0x4000888630) (0x400059a320) Stream added, broadcasting: 5\nI1127 22:02:23.703361 2422 log.go:172] (0x4000888630) Reply frame received for 5\nI1127 22:02:23.759359 2422 log.go:172] (0x4000888630) Data frame received for 5\nI1127 22:02:23.759563 2422 log.go:172] (0x4000888630) Data frame received for 1\nI1127 22:02:23.759919 2422 log.go:172] (0x4000888630) Data frame received for 3\nI1127 22:02:23.760524 2422 log.go:172] (0x4000842000) (3) Data frame handling\nI1127 22:02:23.761405 2422 log.go:172] (0x4000842a00) (1) Data frame handling\nI1127 22:02:23.761617 2422 log.go:172] (0x400059a320) (5) Data frame handling\nI1127 22:02:23.762089 2422 log.go:172] (0x4000842000) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI1127 22:02:23.762914 2422 log.go:172] (0x400059a320) (5) Data frame sent\nI1127 22:02:23.763124 2422 log.go:172] (0x4000888630) Data frame received for 3\nI1127 22:02:23.763235 2422 
log.go:172] (0x4000842000) (3) Data frame handling\nI1127 22:02:23.763336 2422 log.go:172] (0x4000888630) Data frame received for 5\nI1127 22:02:23.763428 2422 log.go:172] (0x400059a320) (5) Data frame handling\nI1127 22:02:23.763911 2422 log.go:172] (0x4000842a00) (1) Data frame sent\nI1127 22:02:23.765939 2422 log.go:172] (0x4000888630) (0x4000842a00) Stream removed, broadcasting: 1\nI1127 22:02:23.767257 2422 log.go:172] (0x4000888630) Go away received\nI1127 22:02:23.770963 2422 log.go:172] (0x4000888630) (0x4000842a00) Stream removed, broadcasting: 1\nI1127 22:02:23.771257 2422 log.go:172] (0x4000888630) (0x4000842000) Stream removed, broadcasting: 3\nI1127 22:02:23.771485 2422 log.go:172] (0x4000888630) (0x400059a320) Stream removed, broadcasting: 5\n" Nov 27 22:02:23.783: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Nov 27 22:02:23.783: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Nov 27 22:02:33.826: INFO: Waiting for StatefulSet statefulset-1526/ss2 to complete update Nov 27 22:02:33.826: INFO: Waiting for Pod statefulset-1526/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Nov 27 22:02:33.826: INFO: Waiting for Pod statefulset-1526/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Nov 27 22:02:33.826: INFO: Waiting for Pod statefulset-1526/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Nov 27 22:02:43.909: INFO: Waiting for StatefulSet statefulset-1526/ss2 to complete update Nov 27 22:02:43.909: INFO: Waiting for Pod statefulset-1526/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Nov 27 22:02:43.909: INFO: Waiting for Pod statefulset-1526/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Nov 27 22:02:53.836: INFO: Waiting for StatefulSet statefulset-1526/ss2 to complete update Nov 27 22:02:53.836: INFO: Waiting for Pod 
statefulset-1526/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Nov 27 22:03:03.843: INFO: Deleting all statefulset in ns statefulset-1526 Nov 27 22:03:03.847: INFO: Scaling statefulset ss2 to 0 Nov 27 22:03:43.879: INFO: Waiting for statefulset status.replicas updated to 0 Nov 27 22:03:43.884: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 22:03:43.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1526" for this suite. Nov 27 22:03:49.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 22:03:50.144: INFO: namespace statefulset-1526 deletion completed in 6.216467575s • [SLOW TEST:175.685 seconds] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 22:03:50.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 22:03:54.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-900" for this suite. 
Nov 27 22:04:00.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 22:04:00.577: INFO: namespace emptydir-wrapper-900 deletion completed in 6.222664653s • [SLOW TEST:10.431 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 22:04:00.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-631 STEP: creating a selector STEP: Creating the service pods in kubernetes Nov 27 22:04:00.621: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Nov 27 22:04:26.793: INFO: ExecWithOptions 
{Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.251:8080/dial?request=hostName&protocol=udp&host=10.244.1.76&port=8081&tries=1'] Namespace:pod-network-test-631 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 22:04:26.793: INFO: >>> kubeConfig: /root/.kube/config I1127 22:04:26.862996 7 log.go:172] (0x4000e24580) (0x4002e17cc0) Create stream I1127 22:04:26.863163 7 log.go:172] (0x4000e24580) (0x4002e17cc0) Stream added, broadcasting: 1 I1127 22:04:26.866791 7 log.go:172] (0x4000e24580) Reply frame received for 1 I1127 22:04:26.866953 7 log.go:172] (0x4000e24580) (0x4002e17d60) Create stream I1127 22:04:26.867027 7 log.go:172] (0x4000e24580) (0x4002e17d60) Stream added, broadcasting: 3 I1127 22:04:26.868422 7 log.go:172] (0x4000e24580) Reply frame received for 3 I1127 22:04:26.868566 7 log.go:172] (0x4000e24580) (0x400181eaa0) Create stream I1127 22:04:26.868653 7 log.go:172] (0x4000e24580) (0x400181eaa0) Stream added, broadcasting: 5 I1127 22:04:26.870284 7 log.go:172] (0x4000e24580) Reply frame received for 5 I1127 22:04:26.981449 7 log.go:172] (0x4000e24580) Data frame received for 5 I1127 22:04:26.981918 7 log.go:172] (0x400181eaa0) (5) Data frame handling I1127 22:04:26.982551 7 log.go:172] (0x4000e24580) Data frame received for 1 I1127 22:04:26.982722 7 log.go:172] (0x4002e17cc0) (1) Data frame handling I1127 22:04:26.982912 7 log.go:172] (0x4002e17cc0) (1) Data frame sent I1127 22:04:26.983071 7 log.go:172] (0x4000e24580) (0x4002e17cc0) Stream removed, broadcasting: 1 I1127 22:04:26.983208 7 log.go:172] (0x4000e24580) Data frame received for 3 I1127 22:04:26.983407 7 log.go:172] (0x4002e17d60) (3) Data frame handling I1127 22:04:26.983879 7 log.go:172] (0x4002e17d60) (3) Data frame sent I1127 22:04:26.984370 7 log.go:172] (0x4000e24580) Data frame received for 3 I1127 22:04:26.984644 7 log.go:172] (0x4002e17d60) (3) Data frame handling I1127 22:04:26.984862 7 log.go:172] 
(0x4000e24580) Go away received I1127 22:04:26.985415 7 log.go:172] (0x4000e24580) (0x4002e17cc0) Stream removed, broadcasting: 1 I1127 22:04:26.985574 7 log.go:172] (0x4000e24580) (0x4002e17d60) Stream removed, broadcasting: 3 I1127 22:04:26.985691 7 log.go:172] (0x4000e24580) (0x400181eaa0) Stream removed, broadcasting: 5 Nov 27 22:04:26.986: INFO: Waiting for endpoints: map[] Nov 27 22:04:26.994: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.251:8080/dial?request=hostName&protocol=udp&host=10.244.2.250&port=8081&tries=1'] Namespace:pod-network-test-631 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 27 22:04:26.995: INFO: >>> kubeConfig: /root/.kube/config I1127 22:04:27.053530 7 log.go:172] (0x4000e24210) (0x4002432280) Create stream I1127 22:04:27.053712 7 log.go:172] (0x4000e24210) (0x4002432280) Stream added, broadcasting: 1 I1127 22:04:27.057994 7 log.go:172] (0x4000e24210) Reply frame received for 1 I1127 22:04:27.058206 7 log.go:172] (0x4000e24210) (0x4002e4e000) Create stream I1127 22:04:27.058313 7 log.go:172] (0x4000e24210) (0x4002e4e000) Stream added, broadcasting: 3 I1127 22:04:27.060247 7 log.go:172] (0x4000e24210) Reply frame received for 3 I1127 22:04:27.060568 7 log.go:172] (0x4000e24210) (0x400259e000) Create stream I1127 22:04:27.060701 7 log.go:172] (0x4000e24210) (0x400259e000) Stream added, broadcasting: 5 I1127 22:04:27.062890 7 log.go:172] (0x4000e24210) Reply frame received for 5 I1127 22:04:27.129062 7 log.go:172] (0x4000e24210) Data frame received for 3 I1127 22:04:27.129407 7 log.go:172] (0x4002e4e000) (3) Data frame handling I1127 22:04:27.129650 7 log.go:172] (0x4000e24210) Data frame received for 5 I1127 22:04:27.129849 7 log.go:172] (0x400259e000) (5) Data frame handling I1127 22:04:27.129973 7 log.go:172] (0x4002e4e000) (3) Data frame sent I1127 22:04:27.130109 7 log.go:172] (0x4000e24210) Data frame received for 3 I1127 
22:04:27.130223 7 log.go:172] (0x4002e4e000) (3) Data frame handling I1127 22:04:27.131151 7 log.go:172] (0x4000e24210) Data frame received for 1 I1127 22:04:27.131295 7 log.go:172] (0x4002432280) (1) Data frame handling I1127 22:04:27.131411 7 log.go:172] (0x4002432280) (1) Data frame sent I1127 22:04:27.131528 7 log.go:172] (0x4000e24210) (0x4002432280) Stream removed, broadcasting: 1 I1127 22:04:27.131658 7 log.go:172] (0x4000e24210) Go away received I1127 22:04:27.131972 7 log.go:172] (0x4000e24210) (0x4002432280) Stream removed, broadcasting: 1 I1127 22:04:27.132179 7 log.go:172] (0x4000e24210) (0x4002e4e000) Stream removed, broadcasting: 3 I1127 22:04:27.132367 7 log.go:172] (0x4000e24210) (0x400259e000) Stream removed, broadcasting: 5 Nov 27 22:04:27.132: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 22:04:27.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-631" for this suite. 
Nov 27 22:04:51.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 22:04:51.328: INFO: namespace pod-network-test-631 deletion completed in 24.184217849s • [SLOW TEST:50.748 seconds] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 22:04:51.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Nov 27 22:04:51.411: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd3f6df6-ab81-4a32-9160-0336022d1050" in namespace "downward-api-9988" to be "success or failure" Nov 27 22:04:51.434: INFO: Pod "downwardapi-volume-cd3f6df6-ab81-4a32-9160-0336022d1050": Phase="Pending", Reason="", readiness=false. Elapsed: 22.893902ms Nov 27 22:04:53.707: INFO: Pod "downwardapi-volume-cd3f6df6-ab81-4a32-9160-0336022d1050": Phase="Pending", Reason="", readiness=false. Elapsed: 2.295328063s Nov 27 22:04:55.713: INFO: Pod "downwardapi-volume-cd3f6df6-ab81-4a32-9160-0336022d1050": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.301379678s STEP: Saw pod success Nov 27 22:04:55.713: INFO: Pod "downwardapi-volume-cd3f6df6-ab81-4a32-9160-0336022d1050" satisfied condition "success or failure" Nov 27 22:04:55.718: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-cd3f6df6-ab81-4a32-9160-0336022d1050 container client-container: STEP: delete the pod Nov 27 22:04:55.791: INFO: Waiting for pod downwardapi-volume-cd3f6df6-ab81-4a32-9160-0336022d1050 to disappear Nov 27 22:04:55.797: INFO: Pod downwardapi-volume-cd3f6df6-ab81-4a32-9160-0336022d1050 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 22:04:55.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9988" for this suite. 
Nov 27 22:05:01.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 22:05:01.989: INFO: namespace downward-api-9988 deletion completed in 6.186249339s • [SLOW TEST:10.657 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 22:05:01.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Nov 27 22:05:02.098: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9743" to be "success or 
failure" Nov 27 22:05:02.104: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.335541ms Nov 27 22:05:04.111: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01330436s Nov 27 22:05:06.144: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046003787s Nov 27 22:05:08.150: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.052533931s STEP: Saw pod success Nov 27 22:05:08.151: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Nov 27 22:05:08.156: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Nov 27 22:05:08.194: INFO: Waiting for pod pod-host-path-test to disappear Nov 27 22:05:08.206: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 22:05:08.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-9743" for this suite. 
Nov 27 22:05:14.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 27 22:05:14.406: INFO: namespace hostpath-9743 deletion completed in 6.190916479s • [SLOW TEST:12.415 seconds] [sig-storage] HostPath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 27 22:05:14.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image 
docker.io/library/nginx:1.14-alpine Nov 27 22:05:14.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3943' Nov 27 22:05:15.840: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Nov 27 22:05:15.840: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Nov 27 22:05:15.850: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-p8fjm] Nov 27 22:05:15.850: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-p8fjm" in namespace "kubectl-3943" to be "running and ready" Nov 27 22:05:15.854: INFO: Pod "e2e-test-nginx-rc-p8fjm": Phase="Pending", Reason="", readiness=false. Elapsed: 3.261691ms Nov 27 22:05:17.861: INFO: Pod "e2e-test-nginx-rc-p8fjm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011017196s Nov 27 22:05:19.868: INFO: Pod "e2e-test-nginx-rc-p8fjm": Phase="Running", Reason="", readiness=true. Elapsed: 4.017794927s Nov 27 22:05:19.868: INFO: Pod "e2e-test-nginx-rc-p8fjm" satisfied condition "running and ready" Nov 27 22:05:19.869: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-p8fjm] Nov 27 22:05:19.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-3943' Nov 27 22:05:21.243: INFO: stderr: "" Nov 27 22:05:21.243: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Nov 27 22:05:21.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3943' Nov 27 22:05:22.468: INFO: stderr: "" Nov 27 22:05:22.468: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 27 22:05:22.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3943" for this suite. 
Nov 27 22:05:28.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:05:28.698: INFO: namespace kubectl-3943 deletion completed in 6.220634435s

• [SLOW TEST:14.292 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:05:28.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:05:54.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1426" for this suite.
Nov 27 22:06:01.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:06:01.169: INFO: namespace namespaces-1426 deletion completed in 6.173281748s
STEP: Destroying namespace "nsdeletetest-6042" for this suite.
Nov 27 22:06:01.172: INFO: Namespace nsdeletetest-6042 was already deleted
STEP: Destroying namespace "nsdeletetest-4808" for this suite.
Nov 27 22:06:07.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:06:07.362: INFO: namespace nsdeletetest-4808 deletion completed in 6.189066564s

• [SLOW TEST:38.662 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:06:07.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-dcbea522-9661-4a1c-89f8-68df68d30b58
STEP: Creating secret with name s-test-opt-upd-5ece70c8-13dd-4755-a055-f85b5f54d719
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-dcbea522-9661-4a1c-89f8-68df68d30b58
STEP: Updating secret s-test-opt-upd-5ece70c8-13dd-4755-a055-f85b5f54d719
STEP: Creating secret with name s-test-opt-create-8d6a07b8-ebe7-4eca-baf2-0dca1b3e8dfb
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:06:15.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1806" for this suite.
Nov 27 22:06:37.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:06:37.888: INFO: namespace projected-1806 deletion completed in 22.254308031s

• [SLOW TEST:30.526 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:06:37.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Nov 27 22:06:38.004: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a2073f8-1aa6-4588-be24-108d5d5c5c55" in namespace "downward-api-1822" to be "success or failure"
Nov 27 22:06:38.033: INFO: Pod "downwardapi-volume-8a2073f8-1aa6-4588-be24-108d5d5c5c55": Phase="Pending", Reason="", readiness=false. Elapsed: 28.691649ms
Nov 27 22:06:40.094: INFO: Pod "downwardapi-volume-8a2073f8-1aa6-4588-be24-108d5d5c5c55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089651898s
Nov 27 22:06:42.100: INFO: Pod "downwardapi-volume-8a2073f8-1aa6-4588-be24-108d5d5c5c55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096377542s
STEP: Saw pod success
Nov 27 22:06:42.101: INFO: Pod "downwardapi-volume-8a2073f8-1aa6-4588-be24-108d5d5c5c55" satisfied condition "success or failure"
Nov 27 22:06:42.137: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-8a2073f8-1aa6-4588-be24-108d5d5c5c55 container client-container: 
STEP: delete the pod
Nov 27 22:06:42.189: INFO: Waiting for pod downwardapi-volume-8a2073f8-1aa6-4588-be24-108d5d5c5c55 to disappear
Nov 27 22:06:42.196: INFO: Pod downwardapi-volume-8a2073f8-1aa6-4588-be24-108d5d5c5c55 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:06:42.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1822" for this suite.
Nov 27 22:06:48.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:06:48.379: INFO: namespace downward-api-1822 deletion completed in 6.175077228s

• [SLOW TEST:10.489 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:06:48.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Nov 27 22:06:48.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-326'
Nov 27 22:06:50.198: INFO: stderr: ""
Nov 27 22:06:50.199: INFO: stdout: "pod/pause created\n"
Nov 27 22:06:50.199: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Nov 27 22:06:50.199: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-326" to be "running and ready"
Nov 27 22:06:50.215: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 15.736427ms
Nov 27 22:06:52.221: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022234947s
Nov 27 22:06:54.229: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.029825553s
Nov 27 22:06:54.229: INFO: Pod "pause" satisfied condition "running and ready"
Nov 27 22:06:54.229: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Nov 27 22:06:54.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-326'
Nov 27 22:06:55.495: INFO: stderr: ""
Nov 27 22:06:55.496: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Nov 27 22:06:55.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-326'
Nov 27 22:06:56.768: INFO: stderr: ""
Nov 27 22:06:56.768: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s testing-label-value\n"
STEP: removing the label testing-label of a pod
Nov 27 22:06:56.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-326'
Nov 27 22:06:58.037: INFO: stderr: ""
Nov 27 22:06:58.037: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Nov 27 22:06:58.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-326'
Nov 27 22:06:59.317: INFO: stderr: ""
Nov 27 22:06:59.317: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s \n"
[AfterEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Nov 27 22:06:59.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-326'
Nov 27 22:07:00.575: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 27 22:07:00.576: INFO: stdout: "pod \"pause\" force deleted\n"
Nov 27 22:07:00.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-326'
Nov 27 22:07:01.849: INFO: stderr: "No resources found.\n"
Nov 27 22:07:01.849: INFO: stdout: ""
Nov 27 22:07:01.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-326 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Nov 27 22:07:03.124: INFO: stderr: ""
Nov 27 22:07:03.124: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:07:03.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-326" for this suite.
Nov 27 22:07:09.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:07:09.305: INFO: namespace kubectl-326 deletion completed in 6.172473794s

• [SLOW TEST:20.925 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:07:09.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Nov 27 22:07:12.493: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:07:12.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5414" for this suite.
Nov 27 22:07:18.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:07:18.731: INFO: namespace container-runtime-5414 deletion completed in 6.202573797s

• [SLOW TEST:9.424 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:07:18.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Nov 27 22:07:18.833: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d95f632a-877d-4b33-ae0d-8ea951d02e31" in namespace "downward-api-6246" to be "success or failure"
Nov 27 22:07:18.906: INFO: Pod "downwardapi-volume-d95f632a-877d-4b33-ae0d-8ea951d02e31": Phase="Pending", Reason="", readiness=false. Elapsed: 73.118516ms
Nov 27 22:07:20.914: INFO: Pod "downwardapi-volume-d95f632a-877d-4b33-ae0d-8ea951d02e31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081137131s
Nov 27 22:07:22.922: INFO: Pod "downwardapi-volume-d95f632a-877d-4b33-ae0d-8ea951d02e31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089152048s
STEP: Saw pod success
Nov 27 22:07:22.922: INFO: Pod "downwardapi-volume-d95f632a-877d-4b33-ae0d-8ea951d02e31" satisfied condition "success or failure"
Nov 27 22:07:22.927: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-d95f632a-877d-4b33-ae0d-8ea951d02e31 container client-container: 
STEP: delete the pod
Nov 27 22:07:22.947: INFO: Waiting for pod downwardapi-volume-d95f632a-877d-4b33-ae0d-8ea951d02e31 to disappear
Nov 27 22:07:22.962: INFO: Pod downwardapi-volume-d95f632a-877d-4b33-ae0d-8ea951d02e31 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:07:22.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6246" for this suite.
Nov 27 22:07:29.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:07:29.198: INFO: namespace downward-api-6246 deletion completed in 6.204127415s

• [SLOW TEST:10.464 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:07:29.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Nov 27 22:07:29.342: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Nov 27 22:07:35.716: INFO: Waiting up to 5m0s for pod "pod-0fd9326c-a707-40ca-8fc2-4cf1b854f933" in namespace "emptydir-4679" to be "success or failure"
Nov 27 22:07:35.725: INFO: Pod "pod-0fd9326c-a707-40ca-8fc2-4cf1b854f933": Phase="Pending", Reason="", readiness=false. Elapsed: 8.999932ms
Nov 27 22:07:37.732: INFO: Pod "pod-0fd9326c-a707-40ca-8fc2-4cf1b854f933": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015214868s
Nov 27 22:07:39.810: INFO: Pod "pod-0fd9326c-a707-40ca-8fc2-4cf1b854f933": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094013267s
STEP: Saw pod success
Nov 27 22:07:39.811: INFO: Pod "pod-0fd9326c-a707-40ca-8fc2-4cf1b854f933" satisfied condition "success or failure"
Nov 27 22:07:39.815: INFO: Trying to get logs from node iruya-worker2 pod pod-0fd9326c-a707-40ca-8fc2-4cf1b854f933 container test-container: 
STEP: delete the pod
Nov 27 22:07:39.895: INFO: Waiting for pod pod-0fd9326c-a707-40ca-8fc2-4cf1b854f933 to disappear
Nov 27 22:07:40.033: INFO: Pod pod-0fd9326c-a707-40ca-8fc2-4cf1b854f933 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:07:40.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4679" for this suite.
Nov 27 22:07:46.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:07:46.236: INFO: namespace emptydir-4679 deletion completed in 6.190744909s

• [SLOW TEST:10.599 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:07:46.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Nov 27 22:07:46.362: INFO: Waiting up to 5m0s for pod "pod-2210797e-ed26-4979-bd7f-be6dc6deb813" in namespace "emptydir-6922" to be "success or failure"
Nov 27 22:07:46.370: INFO: Pod "pod-2210797e-ed26-4979-bd7f-be6dc6deb813": Phase="Pending", Reason="", readiness=false. Elapsed: 7.978841ms
Nov 27 22:07:48.375: INFO: Pod "pod-2210797e-ed26-4979-bd7f-be6dc6deb813": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012991771s
Nov 27 22:07:50.382: INFO: Pod "pod-2210797e-ed26-4979-bd7f-be6dc6deb813": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020176768s
STEP: Saw pod success
Nov 27 22:07:50.383: INFO: Pod "pod-2210797e-ed26-4979-bd7f-be6dc6deb813" satisfied condition "success or failure"
Nov 27 22:07:50.387: INFO: Trying to get logs from node iruya-worker2 pod pod-2210797e-ed26-4979-bd7f-be6dc6deb813 container test-container: 
STEP: delete the pod
Nov 27 22:07:50.419: INFO: Waiting for pod pod-2210797e-ed26-4979-bd7f-be6dc6deb813 to disappear
Nov 27 22:07:50.436: INFO: Pod pod-2210797e-ed26-4979-bd7f-be6dc6deb813 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:07:50.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6922" for this suite.
Nov 27 22:07:56.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:07:56.662: INFO: namespace emptydir-6922 deletion completed in 6.191787128s

• [SLOW TEST:10.425 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:07:56.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:08:00.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3449" for this suite.
Nov 27 22:08:50.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:08:51.043: INFO: namespace kubelet-test-3449 deletion completed in 50.183911329s

• [SLOW TEST:54.380 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:08:51.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Nov 27 22:08:51.111: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix018634870/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:08:52.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8816" for this suite.
Nov 27 22:08:58.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:08:58.375: INFO: namespace kubectl-8816 deletion completed in 6.214115972s

• [SLOW TEST:7.331 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
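The proxy spec above can be reproduced by hand. A minimal sketch, assuming a reachable cluster in `~/.kube/config`; the socket path and the one-second wait are arbitrary choices, not taken from the test:

```shell
# Start kubectl proxy on a unix socket instead of a TCP port.
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
PROXY_PID=$!
sleep 1

# curl can speak HTTP over the socket directly; the hostname in the URL
# is a placeholder, since routing happens via the socket.
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/

kill "$PROXY_PID"
```

This is the same `/api/` retrieval the STEP lines above describe; it requires a live cluster, so it is not self-checking.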
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:08:58.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6163.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6163.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6163.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6163.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Nov 27 22:09:06.570: INFO: DNS probes using dns-test-d1b5333a-3562-492c-be46-9224136d36b4 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6163.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6163.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6163.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6163.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Nov 27 22:09:12.780: INFO: File wheezy_udp@dns-test-service-3.dns-6163.svc.cluster.local from pod  dns-6163/dns-test-56bde1a3-76cc-4387-a88d-fda421561758 contains 'foo.example.com.
' instead of 'bar.example.com.'
Nov 27 22:09:12.785: INFO: File jessie_udp@dns-test-service-3.dns-6163.svc.cluster.local from pod  dns-6163/dns-test-56bde1a3-76cc-4387-a88d-fda421561758 contains 'foo.example.com.
' instead of 'bar.example.com.'
Nov 27 22:09:12.785: INFO: Lookups using dns-6163/dns-test-56bde1a3-76cc-4387-a88d-fda421561758 failed for: [wheezy_udp@dns-test-service-3.dns-6163.svc.cluster.local jessie_udp@dns-test-service-3.dns-6163.svc.cluster.local]

Nov 27 22:09:17.798: INFO: File wheezy_udp@dns-test-service-3.dns-6163.svc.cluster.local from pod  dns-6163/dns-test-56bde1a3-76cc-4387-a88d-fda421561758 contains 'foo.example.com.
' instead of 'bar.example.com.'
Nov 27 22:09:17.802: INFO: File jessie_udp@dns-test-service-3.dns-6163.svc.cluster.local from pod  dns-6163/dns-test-56bde1a3-76cc-4387-a88d-fda421561758 contains 'foo.example.com.
' instead of 'bar.example.com.'
Nov 27 22:09:17.802: INFO: Lookups using dns-6163/dns-test-56bde1a3-76cc-4387-a88d-fda421561758 failed for: [wheezy_udp@dns-test-service-3.dns-6163.svc.cluster.local jessie_udp@dns-test-service-3.dns-6163.svc.cluster.local]

Nov 27 22:09:22.792: INFO: File wheezy_udp@dns-test-service-3.dns-6163.svc.cluster.local from pod  dns-6163/dns-test-56bde1a3-76cc-4387-a88d-fda421561758 contains 'foo.example.com.
' instead of 'bar.example.com.'
Nov 27 22:09:22.797: INFO: File jessie_udp@dns-test-service-3.dns-6163.svc.cluster.local from pod  dns-6163/dns-test-56bde1a3-76cc-4387-a88d-fda421561758 contains 'foo.example.com.
' instead of 'bar.example.com.'
Nov 27 22:09:22.797: INFO: Lookups using dns-6163/dns-test-56bde1a3-76cc-4387-a88d-fda421561758 failed for: [wheezy_udp@dns-test-service-3.dns-6163.svc.cluster.local jessie_udp@dns-test-service-3.dns-6163.svc.cluster.local]

Nov 27 22:09:27.792: INFO: File wheezy_udp@dns-test-service-3.dns-6163.svc.cluster.local from pod  dns-6163/dns-test-56bde1a3-76cc-4387-a88d-fda421561758 contains 'foo.example.com.
' instead of 'bar.example.com.'
Nov 27 22:09:27.797: INFO: File jessie_udp@dns-test-service-3.dns-6163.svc.cluster.local from pod  dns-6163/dns-test-56bde1a3-76cc-4387-a88d-fda421561758 contains 'foo.example.com.
' instead of 'bar.example.com.'
Nov 27 22:09:27.797: INFO: Lookups using dns-6163/dns-test-56bde1a3-76cc-4387-a88d-fda421561758 failed for: [wheezy_udp@dns-test-service-3.dns-6163.svc.cluster.local jessie_udp@dns-test-service-3.dns-6163.svc.cluster.local]

Nov 27 22:09:32.793: INFO: File wheezy_udp@dns-test-service-3.dns-6163.svc.cluster.local from pod  dns-6163/dns-test-56bde1a3-76cc-4387-a88d-fda421561758 contains 'foo.example.com.
' instead of 'bar.example.com.'
Nov 27 22:09:32.798: INFO: File jessie_udp@dns-test-service-3.dns-6163.svc.cluster.local from pod  dns-6163/dns-test-56bde1a3-76cc-4387-a88d-fda421561758 contains 'foo.example.com.
' instead of 'bar.example.com.'
Nov 27 22:09:32.798: INFO: Lookups using dns-6163/dns-test-56bde1a3-76cc-4387-a88d-fda421561758 failed for: [wheezy_udp@dns-test-service-3.dns-6163.svc.cluster.local jessie_udp@dns-test-service-3.dns-6163.svc.cluster.local]

Nov 27 22:09:37.798: INFO: DNS probes using dns-test-56bde1a3-76cc-4387-a88d-fda421561758 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6163.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6163.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6163.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6163.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Nov 27 22:09:44.487: INFO: DNS probes using dns-test-90d6c057-4a94-4259-a118-ba18d7570b2e succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:09:44.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6163" for this suite.
Nov 27 22:09:50.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:09:51.161: INFO: namespace dns-6163 deletion completed in 6.331668807s

• [SLOW TEST:52.784 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
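The service this DNS spec drives through its three phases can be sketched as a manifest. This is an illustrative reconstruction, not the test's actual fixture; the service name and namespace are taken from the log above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-6163
spec:
  type: ExternalName
  externalName: foo.example.com   # phase 2 of the test patches this to bar.example.com
```

In the final phase the test switches `spec.type` to ClusterIP, at which point the probe commands ask `dig` for an A record instead of a CNAME, matching the change visible in the STEP lines. The intermediate lookup failures in the log are the probes observing the old CNAME until the DNS cache catches up with the patched `externalName`.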
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:09:51.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-13e56be8-a476-46e6-9651-0f299023c119
STEP: Creating a pod to test consume secrets
Nov 27 22:09:51.289: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-df201be9-1167-485b-a363-4bef467dcd62" in namespace "projected-619" to be "success or failure"
Nov 27 22:09:51.307: INFO: Pod "pod-projected-secrets-df201be9-1167-485b-a363-4bef467dcd62": Phase="Pending", Reason="", readiness=false. Elapsed: 18.180818ms
Nov 27 22:09:53.314: INFO: Pod "pod-projected-secrets-df201be9-1167-485b-a363-4bef467dcd62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025414656s
Nov 27 22:09:55.320: INFO: Pod "pod-projected-secrets-df201be9-1167-485b-a363-4bef467dcd62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031051036s
STEP: Saw pod success
Nov 27 22:09:55.320: INFO: Pod "pod-projected-secrets-df201be9-1167-485b-a363-4bef467dcd62" satisfied condition "success or failure"
Nov 27 22:09:55.325: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-df201be9-1167-485b-a363-4bef467dcd62 container projected-secret-volume-test: 
STEP: delete the pod
Nov 27 22:09:55.348: INFO: Waiting for pod pod-projected-secrets-df201be9-1167-485b-a363-4bef467dcd62 to disappear
Nov 27 22:09:55.352: INFO: Pod pod-projected-secrets-df201be9-1167-485b-a363-4bef467dcd62 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:09:55.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-619" for this suite.
Nov 27 22:10:01.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:10:01.536: INFO: namespace projected-619 deletion completed in 6.175206225s

• [SLOW TEST:10.373 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
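A pod of the shape this projected-secret spec creates might look like the following. The secret name is the one from the log; the mount path, key, and image are assumptions for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    # Print the projected key so "consumable from pods in volume" is observable.
    command: ["cat", "/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-13e56be8-a476-46e6-9651-0f299023c119
```

The `projected` volume type is what distinguishes this spec from the plain `secret`-volume conformance tests: several sources (secrets, configMaps, downward API) can share one mount point.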
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:10:01.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Nov 27 22:10:01.653: INFO: Waiting up to 5m0s for pod "downwardapi-volume-394480b6-a654-4838-a40c-51f3e5e35b7c" in namespace "projected-5446" to be "success or failure"
Nov 27 22:10:01.658: INFO: Pod "downwardapi-volume-394480b6-a654-4838-a40c-51f3e5e35b7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.680458ms
Nov 27 22:10:03.666: INFO: Pod "downwardapi-volume-394480b6-a654-4838-a40c-51f3e5e35b7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012550573s
Nov 27 22:10:05.674: INFO: Pod "downwardapi-volume-394480b6-a654-4838-a40c-51f3e5e35b7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020548973s
STEP: Saw pod success
Nov 27 22:10:05.674: INFO: Pod "downwardapi-volume-394480b6-a654-4838-a40c-51f3e5e35b7c" satisfied condition "success or failure"
Nov 27 22:10:05.678: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-394480b6-a654-4838-a40c-51f3e5e35b7c container client-container: 
STEP: delete the pod
Nov 27 22:10:05.710: INFO: Waiting for pod downwardapi-volume-394480b6-a654-4838-a40c-51f3e5e35b7c to disappear
Nov 27 22:10:05.714: INFO: Pod downwardapi-volume-394480b6-a654-4838-a40c-51f3e5e35b7c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:10:05.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5446" for this suite.
Nov 27 22:10:11.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:10:11.898: INFO: namespace projected-5446 deletion completed in 6.17579051s

• [SLOW TEST:10.361 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
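The "set mode on item file" behaviour exercised above comes down to a per-item `mode` in a projected downward API source. A sketch, with names and the specific mode chosen for illustration rather than taken from the test:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # List the mount so the per-file permissions are visible in the pod logs.
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400   # the per-item file mode the test asserts on
```

The `[LinuxOnly]` tag exists because file modes are a POSIX concept; Windows nodes cannot honour them.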
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:10:11.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Nov 27 22:10:12.005: INFO: Waiting up to 5m0s for pod "pod-aa6aa4f1-dfbe-4577-8e8b-01da662a14d7" in namespace "emptydir-6262" to be "success or failure"
Nov 27 22:10:12.027: INFO: Pod "pod-aa6aa4f1-dfbe-4577-8e8b-01da662a14d7": Phase="Pending", Reason="", readiness=false. Elapsed: 21.081275ms
Nov 27 22:10:14.034: INFO: Pod "pod-aa6aa4f1-dfbe-4577-8e8b-01da662a14d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02831765s
Nov 27 22:10:16.040: INFO: Pod "pod-aa6aa4f1-dfbe-4577-8e8b-01da662a14d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03465804s
STEP: Saw pod success
Nov 27 22:10:16.041: INFO: Pod "pod-aa6aa4f1-dfbe-4577-8e8b-01da662a14d7" satisfied condition "success or failure"
Nov 27 22:10:16.046: INFO: Trying to get logs from node iruya-worker2 pod pod-aa6aa4f1-dfbe-4577-8e8b-01da662a14d7 container test-container: 
STEP: delete the pod
Nov 27 22:10:16.088: INFO: Waiting for pod pod-aa6aa4f1-dfbe-4577-8e8b-01da662a14d7 to disappear
Nov 27 22:10:16.097: INFO: Pod pod-aa6aa4f1-dfbe-4577-8e8b-01da662a14d7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:10:16.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6262" for this suite.
Nov 27 22:10:22.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:10:22.289: INFO: namespace emptydir-6262 deletion completed in 6.18231862s

• [SLOW TEST:10.390 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
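The tmpfs emptyDir variant being verified above is selected with `medium: Memory`. A minimal sketch (pod name, image, and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Show the filesystem type and the mode the conformance test checks.
    command: ["sh", "-c", "mount | grep /cache; stat -c %a /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory   # back the emptyDir with tmpfs instead of node disk
```

With `medium: Memory` the volume counts against the container's memory limit, which is the practical trade-off for the speed of tmpfs.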
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:10:22.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Nov 27 22:10:22.367: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:10:35.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-305" for this suite.
Nov 27 22:10:41.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:10:41.623: INFO: namespace pods-305 deletion completed in 6.217168063s

• [SLOW TEST:19.332 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:10:41.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Nov 27 22:10:41.683: INFO: Waiting up to 5m0s for pod "downward-api-5b8d1301-da44-464f-8361-3fa3300e50e1" in namespace "downward-api-7068" to be "success or failure"
Nov 27 22:10:41.702: INFO: Pod "downward-api-5b8d1301-da44-464f-8361-3fa3300e50e1": Phase="Pending", Reason="", readiness=false. Elapsed: 18.901218ms
Nov 27 22:10:43.708: INFO: Pod "downward-api-5b8d1301-da44-464f-8361-3fa3300e50e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025134659s
Nov 27 22:10:45.717: INFO: Pod "downward-api-5b8d1301-da44-464f-8361-3fa3300e50e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033894187s
STEP: Saw pod success
Nov 27 22:10:45.717: INFO: Pod "downward-api-5b8d1301-da44-464f-8361-3fa3300e50e1" satisfied condition "success or failure"
Nov 27 22:10:45.721: INFO: Trying to get logs from node iruya-worker2 pod downward-api-5b8d1301-da44-464f-8361-3fa3300e50e1 container dapi-container: 
STEP: delete the pod
Nov 27 22:10:45.744: INFO: Waiting for pod downward-api-5b8d1301-da44-464f-8361-3fa3300e50e1 to disappear
Nov 27 22:10:45.748: INFO: Pod downward-api-5b8d1301-da44-464f-8361-3fa3300e50e1 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:10:45.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7068" for this suite.
Nov 27 22:10:51.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:10:51.958: INFO: namespace downward-api-7068 deletion completed in 6.183929065s

• [SLOW TEST:10.334 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
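The Downward API defaulting behaviour this spec covers can be sketched as follows: when a container sets no `resources.limits`, a `resourceFieldRef` for `limits.cpu` or `limits.memory` resolves to the node's allocatable capacity instead. Names here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-defaults-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep _LIMIT"]
    # No resources.limits are declared, so these fall back to node allocatable,
    # which is the behaviour the conformance test asserts.
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```
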
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:10:51.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Nov 27 22:10:52.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Nov 27 22:10:53.313: INFO: stderr: ""
Nov 27 22:10:53.313: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T05:17:59Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/arm64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:31:02Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:10:53.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5883" for this suite.
Nov 27 22:10:59.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:10:59.571: INFO: namespace kubectl-5883 deletion completed in 6.247429695s

• [SLOW TEST:7.611 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:10:59.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Nov 27 22:10:59.652: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab070a5e-e8ff-4f8e-97cd-6c0a13e07394" in namespace "projected-7218" to be "success or failure"
Nov 27 22:10:59.704: INFO: Pod "downwardapi-volume-ab070a5e-e8ff-4f8e-97cd-6c0a13e07394": Phase="Pending", Reason="", readiness=false. Elapsed: 51.956912ms
Nov 27 22:11:01.710: INFO: Pod "downwardapi-volume-ab070a5e-e8ff-4f8e-97cd-6c0a13e07394": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058518424s
Nov 27 22:11:03.718: INFO: Pod "downwardapi-volume-ab070a5e-e8ff-4f8e-97cd-6c0a13e07394": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06602758s
STEP: Saw pod success
Nov 27 22:11:03.718: INFO: Pod "downwardapi-volume-ab070a5e-e8ff-4f8e-97cd-6c0a13e07394" satisfied condition "success or failure"
Nov 27 22:11:03.723: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-ab070a5e-e8ff-4f8e-97cd-6c0a13e07394 container client-container: 
STEP: delete the pod
Nov 27 22:11:03.746: INFO: Waiting for pod downwardapi-volume-ab070a5e-e8ff-4f8e-97cd-6c0a13e07394 to disappear
Nov 27 22:11:03.751: INFO: Pod downwardapi-volume-ab070a5e-e8ff-4f8e-97cd-6c0a13e07394 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:11:03.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7218" for this suite.
Nov 27 22:11:09.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:11:09.967: INFO: namespace projected-7218 deletion completed in 6.210326229s

• [SLOW TEST:10.395 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:11:09.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Nov 27 22:11:10.065: INFO: Waiting up to 5m0s for pod "client-containers-104ec022-b65b-4aaa-b4c6-c2416bbb7b06" in namespace "containers-9348" to be "success or failure"
Nov 27 22:11:10.072: INFO: Pod "client-containers-104ec022-b65b-4aaa-b4c6-c2416bbb7b06": Phase="Pending", Reason="", readiness=false. Elapsed: 7.694818ms
Nov 27 22:11:12.078: INFO: Pod "client-containers-104ec022-b65b-4aaa-b4c6-c2416bbb7b06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01313059s
Nov 27 22:11:14.085: INFO: Pod "client-containers-104ec022-b65b-4aaa-b4c6-c2416bbb7b06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020691713s
STEP: Saw pod success
Nov 27 22:11:14.086: INFO: Pod "client-containers-104ec022-b65b-4aaa-b4c6-c2416bbb7b06" satisfied condition "success or failure"
Nov 27 22:11:14.090: INFO: Trying to get logs from node iruya-worker2 pod client-containers-104ec022-b65b-4aaa-b4c6-c2416bbb7b06 container test-container: 
STEP: delete the pod
Nov 27 22:11:14.122: INFO: Waiting for pod client-containers-104ec022-b65b-4aaa-b4c6-c2416bbb7b06 to disappear
Nov 27 22:11:14.140: INFO: Pod client-containers-104ec022-b65b-4aaa-b4c6-c2416bbb7b06 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:11:14.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9348" for this suite.
Nov 27 22:11:20.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:11:20.354: INFO: namespace containers-9348 deletion completed in 6.204702304s

• [SLOW TEST:10.385 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
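The test above verifies the Kubernetes container command/args override rules: a pod-level `command` replaces the image's ENTRYPOINT, pod-level `args` replace the image's CMD, and setting `command` alone discards the image's CMD entirely. A minimal sketch of those rules (the function name is illustrative, not part of the e2e framework):

```python
def effective_invocation(entrypoint, cmd, command=None, args=None):
    """Compute a container's process argv under the Kubernetes
    command/args override rules:
      - neither set:      image ENTRYPOINT + image CMD
      - only args set:    image ENTRYPOINT + args
      - only command set: command alone (image CMD is ignored)
      - both set:         command + args
    """
    if command is None and args is None:
        return list(entrypoint) + list(cmd)
    if command is None:
        return list(entrypoint) + list(args)
    if args is None:
        return list(command)
    return list(command) + list(args)
```

"Override all", as in the pod created above, corresponds to the last case: both `command` and `args` set, so nothing from the image's defaults survives.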
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:11:20.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Nov 27 22:11:20.452: INFO: Waiting up to 5m0s for pod "pod-1a3e65f6-a799-41e0-be30-21af79858fbe" in namespace "emptydir-6802" to be "success or failure"
Nov 27 22:11:20.486: INFO: Pod "pod-1a3e65f6-a799-41e0-be30-21af79858fbe": Phase="Pending", Reason="", readiness=false. Elapsed: 34.427665ms
Nov 27 22:11:22.492: INFO: Pod "pod-1a3e65f6-a799-41e0-be30-21af79858fbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040319409s
Nov 27 22:11:24.499: INFO: Pod "pod-1a3e65f6-a799-41e0-be30-21af79858fbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046979535s
STEP: Saw pod success
Nov 27 22:11:24.499: INFO: Pod "pod-1a3e65f6-a799-41e0-be30-21af79858fbe" satisfied condition "success or failure"
Nov 27 22:11:24.504: INFO: Trying to get logs from node iruya-worker pod pod-1a3e65f6-a799-41e0-be30-21af79858fbe container test-container: 
STEP: delete the pod
Nov 27 22:11:24.544: INFO: Waiting for pod pod-1a3e65f6-a799-41e0-be30-21af79858fbe to disappear
Nov 27 22:11:24.551: INFO: Pod pod-1a3e65f6-a799-41e0-be30-21af79858fbe no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:11:24.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6802" for this suite.
Nov 27 22:11:30.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:11:30.760: INFO: namespace emptydir-6802 deletion completed in 6.201950652s

• [SLOW TEST:10.405 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
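The "(non-root,0777,tmpfs)" case above checks the permission bits on an emptyDir mount. Mode values show up in two notations in these logs: octal in test names (0777) and decimal in pod JSON (e.g. `defaultMode: 420`, which is octal 0644). A small sketch of the conversion, using only the standard library:

```python
import stat


def mode_string(mode: int) -> str:
    """Render permission bits the way `ls -l` does, e.g. 0o777 -> 'rwxrwxrwx'.

    stat.filemode() prepends a file-type character ('?' when the type
    bits are zero), so it is sliced off here.
    """
    return stat.filemode(mode)[1:]
```

For example, the `defaultMode: 420` seen on secret volumes elsewhere in this log is `0o644` (`rw-r--r--`), while this test's 0777 is decimal 511 (`rwxrwxrwx`).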
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:11:30.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Nov 27 22:11:30.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-9301'
Nov 27 22:11:32.134: INFO: stderr: ""
Nov 27 22:11:32.134: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Nov 27 22:11:37.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-9301 -o json'
Nov 27 22:11:38.450: INFO: stderr: ""
Nov 27 22:11:38.450: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-11-27T22:11:32Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-9301\",\n        \"resourceVersion\": \"11925663\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-9301/pods/e2e-test-nginx-pod\",\n        \"uid\": \"126f83aa-ff4a-40da-b822-0ad32f0ffbfd\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-vvh29\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-worker\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": 
\"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-vvh29\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-vvh29\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-11-27T22:11:32Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-11-27T22:11:35Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-11-27T22:11:35Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-11-27T22:11:32Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://caa91783f806c2393404e6e3a7981679c624a603b69d3c04e91a78256715fd4f\",\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        
\"startedAt\": \"2020-11-27T22:11:34Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.6\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.1.86\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-11-27T22:11:32Z\"\n    }\n}\n"
STEP: replace the image in the pod
Nov 27 22:11:38.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9301'
Nov 27 22:11:40.134: INFO: stderr: ""
Nov 27 22:11:40.134: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Nov 27 22:11:40.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-9301'
Nov 27 22:11:45.380: INFO: stderr: ""
Nov 27 22:11:45.380: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:11:45.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9301" for this suite.
Nov 27 22:11:51.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:11:51.586: INFO: namespace kubectl-9301 deletion completed in 6.194541634s

• [SLOW TEST:20.822 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
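After `kubectl replace`, the test above verifies the pod's image by reading the pod back as JSON and inspecting `spec.containers`. A sketch of that lookup over the decoded JSON (helper name is illustrative):

```python
def container_image(pod: dict, name: str):
    """Return the image of the named container in a pod manifest
    (as decoded from `kubectl get pod -o json`), or None if no
    container with that name exists."""
    for container in pod.get("spec", {}).get("containers", []):
        if container.get("name") == name:
            return container.get("image")
    return None
```

Applied to the pod JSON logged above, this would return `docker.io/library/nginx:1.14-alpine` before the replace and `docker.io/library/busybox:1.29` afterwards.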
SSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:11:51.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Nov 27 22:11:55.789: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-a284ff7f-ba86-4cd4-a5c5-548def58a770,GenerateName:,Namespace:events-994,SelfLink:/api/v1/namespaces/events-994/pods/send-events-a284ff7f-ba86-4cd4-a5c5-548def58a770,UID:fea2118f-f652-420c-8176-0007b3a2834a,ResourceVersion:11925736,Generation:0,CreationTimestamp:2020-11-27 22:11:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 735553266,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2tnxf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tnxf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-2tnxf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x4002825c80} {node.kubernetes.io/unreachable Exists  NoExecute 
0x4002825ca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 22:11:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 22:11:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 22:11:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-27 22:11:51 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.87,StartTime:2020-11-27 22:11:51 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-11-27 22:11:54 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://f51a43bbfd51c76428cf38a908a31f280a4005fda87dcb5e358cb164980d8871}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Nov 27 22:11:57.801: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Nov 27 22:11:59.811: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:11:59.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-994" for this suite.
Nov 27 22:12:37.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:12:38.049: INFO: namespace events-994 deletion completed in 38.178931985s

• [SLOW TEST:46.463 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
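The Events test above looks for one event emitted by the scheduler and one by the kubelet for the same pod. Conceptually that is a filter over v1 Event objects by their source component; a sketch over event dicts (the real test uses API field selectors rather than client-side filtering):

```python
def events_by_component(events, component):
    """Filter a list of v1.Event-like dicts down to those emitted by
    the given source component, e.g. 'default-scheduler' or 'kubelet'."""
    return [e for e in events
            if e.get("source", {}).get("component") == component]
```

"Saw scheduler event" and "Saw kubelet event" in the log correspond to each of these filters returning a non-empty list within the polling window.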
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:12:38.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Nov 27 22:12:38.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3717'
Nov 27 22:12:42.363: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Nov 27 22:12:42.364: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: rolling-update to same image controller
Nov 27 22:12:42.431: INFO: scanned /root for discovery docs: 
Nov 27 22:12:42.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3717'
Nov 27 22:13:00.092: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Nov 27 22:13:00.092: INFO: stdout: "Created e2e-test-nginx-rc-23e2861eae3bea1169ec2cbc5bffbbea\nScaling up e2e-test-nginx-rc-23e2861eae3bea1169ec2cbc5bffbbea from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-23e2861eae3bea1169ec2cbc5bffbbea up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-23e2861eae3bea1169ec2cbc5bffbbea to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Nov 27 22:13:00.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3717'
Nov 27 22:13:01.360: INFO: stderr: ""
Nov 27 22:13:01.361: INFO: stdout: "e2e-test-nginx-rc-23e2861eae3bea1169ec2cbc5bffbbea-f6cwf "
Nov 27 22:13:01.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-23e2861eae3bea1169ec2cbc5bffbbea-f6cwf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3717'
Nov 27 22:13:02.592: INFO: stderr: ""
Nov 27 22:13:02.593: INFO: stdout: "true"
Nov 27 22:13:02.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-23e2861eae3bea1169ec2cbc5bffbbea-f6cwf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3717'
Nov 27 22:13:03.865: INFO: stderr: ""
Nov 27 22:13:03.865: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Nov 27 22:13:03.865: INFO: e2e-test-nginx-rc-23e2861eae3bea1169ec2cbc5bffbbea-f6cwf is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Nov 27 22:13:03.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3717'
Nov 27 22:13:05.142: INFO: stderr: ""
Nov 27 22:13:05.143: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:13:05.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3717" for this suite.
Nov 27 22:13:11.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:13:11.357: INFO: namespace kubectl-3717 deletion completed in 6.205439918s

• [SLOW TEST:33.301 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
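The rolling-update output above describes the classic `kubectl rolling-update` strategy: surge the new replication controller up by one pod, then scale the old one down by one, keeping at least the desired count available and never exceeding desired + 1 in flight ("keep 1 pods available, don't exceed 2 pods"). A simplified simulation of that schedule (a sketch, not kubectl's actual implementation):

```python
def rolling_update_steps(desired):
    """Yield (new, old) replica counts for a one-at-a-time rolling
    update: keep at least `desired` pods available, never run more
    than `desired` + 1 at once."""
    max_total = desired + 1
    new, old = 0, desired
    yield new, old
    while old > 0 or new < desired:
        if new < desired and new + old < max_total:
            new += 1          # surge: scale the new controller up by one
        else:
            old -= 1          # then scale the old controller down by one
        yield new, old
```

For the single-replica case in the log, the schedule is (0, 1) → (1, 1) → (1, 0), matching "Scaling up ... from 0 to 1, scaling down ... from 1 to 0".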
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:13:11.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-bmlrm in namespace proxy-208
I1127 22:13:11.491270       7 runners.go:180] Created replication controller with name: proxy-service-bmlrm, namespace: proxy-208, replica count: 1
I1127 22:13:12.542871       7 runners.go:180] proxy-service-bmlrm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1127 22:13:13.543672       7 runners.go:180] proxy-service-bmlrm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1127 22:13:14.544547       7 runners.go:180] proxy-service-bmlrm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1127 22:13:15.545297       7 runners.go:180] proxy-service-bmlrm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1127 22:13:16.545868       7 runners.go:180] proxy-service-bmlrm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1127 22:13:17.546515       7 runners.go:180] proxy-service-bmlrm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1127 22:13:18.547342       7 runners.go:180] proxy-service-bmlrm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1127 22:13:19.548326       7 runners.go:180] proxy-service-bmlrm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1127 22:13:20.549223       7 runners.go:180] proxy-service-bmlrm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1127 22:13:21.549932       7 runners.go:180] proxy-service-bmlrm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1127 22:13:22.550631       7 runners.go:180] proxy-service-bmlrm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1127 22:13:23.551529       7 runners.go:180] proxy-service-bmlrm Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Nov 27 22:13:23.566: INFO: setup took 12.142170307s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Nov 27 22:13:23.577: INFO: (0) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:160/proxy/: foo (200; 9.207361ms)
Nov 27 22:13:23.578: INFO: (0) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:160/proxy/: foo (200; 10.187356ms)
Nov 27 22:13:23.578: INFO: (0) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:162/proxy/: bar (200; 9.515811ms)
Nov 27 22:13:23.578: INFO: (0) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8/proxy/: test (200; 10.914006ms)
Nov 27 22:13:23.578: INFO: (0) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:1080/proxy/: t... (200; 11.019424ms)
Nov 27 22:13:23.580: INFO: (0) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:162/proxy/: bar (200; 11.37114ms)
Nov 27 22:13:23.580: INFO: (0) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:1080/proxy/: testtest (200; 6.56005ms)
Nov 27 22:13:23.616: INFO: (1) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:162/proxy/: bar (200; 7.498405ms)
Nov 27 22:13:23.617: INFO: (1) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:1080/proxy/: t... (200; 7.941669ms)
Nov 27 22:13:23.617: INFO: (1) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:443/proxy/: testtestt... (200; 6.891093ms)
Nov 27 22:13:23.629: INFO: (2) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:460/proxy/: tls baz (200; 7.345332ms)
Nov 27 22:13:23.629: INFO: (2) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8/proxy/: test (200; 7.365667ms)
Nov 27 22:13:23.629: INFO: (2) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:462/proxy/: tls qux (200; 7.170879ms)
Nov 27 22:13:23.629: INFO: (2) /api/v1/namespaces/proxy-208/services/proxy-service-bmlrm:portname2/proxy/: bar (200; 7.175593ms)
Nov 27 22:13:23.629: INFO: (2) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:160/proxy/: foo (200; 7.307653ms)
Nov 27 22:13:23.629: INFO: (2) /api/v1/namespaces/proxy-208/services/http:proxy-service-bmlrm:portname2/proxy/: bar (200; 7.361647ms)
Nov 27 22:13:23.629: INFO: (2) /api/v1/namespaces/proxy-208/services/http:proxy-service-bmlrm:portname1/proxy/: foo (200; 7.6259ms)
Nov 27 22:13:23.629: INFO: (2) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:160/proxy/: foo (200; 7.272451ms)
Nov 27 22:13:23.629: INFO: (2) /api/v1/namespaces/proxy-208/services/https:proxy-service-bmlrm:tlsportname2/proxy/: tls qux (200; 7.65624ms)
Nov 27 22:13:23.629: INFO: (2) /api/v1/namespaces/proxy-208/services/https:proxy-service-bmlrm:tlsportname1/proxy/: tls baz (200; 7.810093ms)
Nov 27 22:13:23.630: INFO: (2) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:162/proxy/: bar (200; 8.071478ms)
Nov 27 22:13:23.630: INFO: (2) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:162/proxy/: bar (200; 8.161683ms)
Nov 27 22:13:23.634: INFO: (3) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:160/proxy/: foo (200; 3.830654ms)
Nov 27 22:13:23.635: INFO: (3) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:462/proxy/: tls qux (200; 4.911319ms)
Nov 27 22:13:23.636: INFO: (3) /api/v1/namespaces/proxy-208/services/proxy-service-bmlrm:portname1/proxy/: foo (200; 5.364147ms)
Nov 27 22:13:23.636: INFO: (3) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:443/proxy/: t... (200; 5.264006ms)
Nov 27 22:13:23.637: INFO: (3) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:460/proxy/: tls baz (200; 6.401914ms)
Nov 27 22:13:23.637: INFO: (3) /api/v1/namespaces/proxy-208/services/http:proxy-service-bmlrm:portname1/proxy/: foo (200; 6.00114ms)
Nov 27 22:13:23.637: INFO: (3) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:160/proxy/: foo (200; 6.900549ms)
Nov 27 22:13:23.637: INFO: (3) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:162/proxy/: bar (200; 7.084548ms)
Nov 27 22:13:23.638: INFO: (3) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:162/proxy/: bar (200; 7.399228ms)
Nov 27 22:13:23.638: INFO: (3) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:1080/proxy/: testtest (200; 7.517074ms)
Nov 27 22:13:23.638: INFO: (3) /api/v1/namespaces/proxy-208/services/proxy-service-bmlrm:portname2/proxy/: bar (200; 7.933344ms)
Nov 27 22:13:23.639: INFO: (3) /api/v1/namespaces/proxy-208/services/http:proxy-service-bmlrm:portname2/proxy/: bar (200; 8.316168ms)
Nov 27 22:13:23.639: INFO: (3) /api/v1/namespaces/proxy-208/services/https:proxy-service-bmlrm:tlsportname1/proxy/: tls baz (200; 8.123323ms)
Nov 27 22:13:23.642: INFO: (4) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:162/proxy/: bar (200; 3.181815ms)
Nov 27 22:13:23.644: INFO: (4) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8/proxy/: test (200; 4.830243ms)
Nov 27 22:13:23.645: INFO: (4) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:460/proxy/: tls baz (200; 5.536256ms)
Nov 27 22:13:23.645: INFO: (4) /api/v1/namespaces/proxy-208/services/http:proxy-service-bmlrm:portname1/proxy/: foo (200; 5.375631ms)
Nov 27 22:13:23.645: INFO: (4) /api/v1/namespaces/proxy-208/services/https:proxy-service-bmlrm:tlsportname1/proxy/: tls baz (200; 5.448228ms)
Nov 27 22:13:23.645: INFO: (4) /api/v1/namespaces/proxy-208/services/proxy-service-bmlrm:portname2/proxy/: bar (200; 5.842402ms)
Nov 27 22:13:23.645: INFO: (4) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:160/proxy/: foo (200; 5.975843ms)
Nov 27 22:13:23.645: INFO: (4) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:462/proxy/: tls qux (200; 5.932478ms)
Nov 27 22:13:23.645: INFO: (4) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:1080/proxy/: testt... (200; 6.531198ms)
Nov 27 22:13:23.646: INFO: (4) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:162/proxy/: bar (200; 6.739482ms)
Nov 27 22:13:23.646: INFO: (4) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:160/proxy/: foo (200; 6.983806ms)
Nov 27 22:13:23.646: INFO: (4) /api/v1/namespaces/proxy-208/services/https:proxy-service-bmlrm:tlsportname2/proxy/: tls qux (200; 7.091387ms)
Nov 27 22:13:23.651: INFO: (5) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:162/proxy/: bar (200; 4.317874ms)
Nov 27 22:13:23.652: INFO: (5) /api/v1/namespaces/proxy-208/services/proxy-service-bmlrm:portname2/proxy/: bar (200; 5.625154ms)
Nov 27 22:13:23.653: INFO: (5) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:162/proxy/: bar (200; 5.828706ms)
Nov 27 22:13:23.653: INFO: (5) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:1080/proxy/: t... (200; 6.370762ms)
Nov 27 22:13:23.653: INFO: (5) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:460/proxy/: tls baz (200; 6.641894ms)
Nov 27 22:13:23.653: INFO: (5) /api/v1/namespaces/proxy-208/services/proxy-service-bmlrm:portname1/proxy/: foo (200; 6.954124ms)
Nov 27 22:13:23.654: INFO: (5) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:443/proxy/: testtest (200; 9.290108ms)
Nov 27 22:13:23.656: INFO: (5) /api/v1/namespaces/proxy-208/services/http:proxy-service-bmlrm:portname1/proxy/: foo (200; 9.384735ms)
Nov 27 22:13:23.656: INFO: (5) /api/v1/namespaces/proxy-208/services/https:proxy-service-bmlrm:tlsportname1/proxy/: tls baz (200; 9.753683ms)
Nov 27 22:13:23.657: INFO: (5) /api/v1/namespaces/proxy-208/services/http:proxy-service-bmlrm:portname2/proxy/: bar (200; 10.04516ms)
Nov 27 22:13:23.661: INFO: (6) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:162/proxy/: bar (200; 3.707563ms)
Nov 27 22:13:23.661: INFO: (6) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:160/proxy/: foo (200; 3.736229ms)
Nov 27 22:13:23.665: INFO: (6) /api/v1/namespaces/proxy-208/services/https:proxy-service-bmlrm:tlsportname2/proxy/: tls qux (200; 7.592427ms)
Nov 27 22:13:23.665: INFO: (6) /api/v1/namespaces/proxy-208/services/proxy-service-bmlrm:portname1/proxy/: foo (200; 7.507046ms)
Nov 27 22:13:23.665: INFO: (6) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:1080/proxy/: t... (200; 7.533189ms)
Nov 27 22:13:23.665: INFO: (6) /api/v1/namespaces/proxy-208/services/https:proxy-service-bmlrm:tlsportname1/proxy/: tls baz (200; 7.689625ms)
Nov 27 22:13:23.665: INFO: (6) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:460/proxy/: tls baz (200; 7.939487ms)
Nov 27 22:13:23.665: INFO: (6) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:443/proxy/: test (200; 8.166483ms)
Nov 27 22:13:23.665: INFO: (6) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:160/proxy/: foo (200; 7.809519ms)
Nov 27 22:13:23.665: INFO: (6) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:462/proxy/: tls qux (200; 8.191406ms)
Nov 27 22:13:23.665: INFO: (6) /api/v1/namespaces/proxy-208/services/http:proxy-service-bmlrm:portname2/proxy/: bar (200; 8.419155ms)
Nov 27 22:13:23.665: INFO: (6) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:1080/proxy/: testt... (200; 6.251716ms)
Nov 27 22:13:23.672: INFO: (7) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:162/proxy/: bar (200; 6.207118ms)
Nov 27 22:13:23.672: INFO: (7) /api/v1/namespaces/proxy-208/services/proxy-service-bmlrm:portname1/proxy/: foo (200; 6.686413ms)
Nov 27 22:13:23.673: INFO: (7) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:1080/proxy/: testtest (200; 7.638709ms)
Nov 27 22:13:23.674: INFO: (7) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:443/proxy/: test (200; 3.553355ms)
Nov 27 22:13:23.679: INFO: (8) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:1080/proxy/: testt... (200; 5.826737ms)
Nov 27 22:13:23.681: INFO: (8) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:460/proxy/: tls baz (200; 6.133817ms)
Nov 27 22:13:23.681: INFO: (8) /api/v1/namespaces/proxy-208/services/https:proxy-service-bmlrm:tlsportname1/proxy/: tls baz (200; 6.474491ms)
Nov 27 22:13:23.681: INFO: (8) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:162/proxy/: bar (200; 6.758015ms)
Nov 27 22:13:23.681: INFO: (8) /api/v1/namespaces/proxy-208/services/proxy-service-bmlrm:portname2/proxy/: bar (200; 6.897717ms)
Nov 27 22:13:23.681: INFO: (8) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:160/proxy/: foo (200; 6.864944ms)
Nov 27 22:13:23.682: INFO: (8) /api/v1/namespaces/proxy-208/services/http:proxy-service-bmlrm:portname2/proxy/: bar (200; 7.506653ms)
Nov 27 22:13:23.682: INFO: (8) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:462/proxy/: tls qux (200; 7.784693ms)
Nov 27 22:13:23.682: INFO: (8) /api/v1/namespaces/proxy-208/services/https:proxy-service-bmlrm:tlsportname2/proxy/: tls qux (200; 7.80327ms)
Nov 27 22:13:23.682: INFO: (8) /api/v1/namespaces/proxy-208/services/proxy-service-bmlrm:portname1/proxy/: foo (200; 7.933533ms)
Nov 27 22:13:23.686: INFO: (9) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:1080/proxy/: testtest (200; 5.321788ms)
Nov 27 22:13:23.688: INFO: (9) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:1080/proxy/: t... (200; 5.649053ms)
Nov 27 22:13:23.688: INFO: (9) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:462/proxy/: tls qux (200; 5.616769ms)
Nov 27 22:13:23.688: INFO: (9) /api/v1/namespaces/proxy-208/services/http:proxy-service-bmlrm:portname1/proxy/: foo (200; 5.733084ms)
Nov 27 22:13:23.688: INFO: (9) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:162/proxy/: bar (200; 5.862622ms)
Nov 27 22:13:23.690: INFO: (9) /api/v1/namespaces/proxy-208/services/http:proxy-service-bmlrm:portname2/proxy/: bar (200; 7.37905ms)
Nov 27 22:13:23.690: INFO: (9) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:160/proxy/: foo (200; 7.550699ms)
Nov 27 22:13:23.690: INFO: (9) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:160/proxy/: foo (200; 7.250329ms)
Nov 27 22:13:23.690: INFO: (9) /api/v1/namespaces/proxy-208/services/proxy-service-bmlrm:portname2/proxy/: bar (200; 7.594871ms)
Nov 27 22:13:23.690: INFO: (9) /api/v1/namespaces/proxy-208/services/https:proxy-service-bmlrm:tlsportname1/proxy/: tls baz (200; 7.868854ms)
Nov 27 22:13:23.691: INFO: (9) /api/v1/namespaces/proxy-208/services/https:proxy-service-bmlrm:tlsportname2/proxy/: tls qux (200; 7.918712ms)
Nov 27 22:13:23.695: INFO: (10) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:162/proxy/: bar (200; 4.23464ms)
Nov 27 22:13:23.695: INFO: (10) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8/proxy/: test (200; 4.28441ms)
Nov 27 22:13:23.695: INFO: (10) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:460/proxy/: tls baz (200; 4.125376ms)
Nov 27 22:13:23.695: INFO: (10) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:462/proxy/: tls qux (200; 4.41475ms)
Nov 27 22:13:23.696: INFO: (10) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:443/proxy/: testt... (200; 6.924416ms)
Nov 27 22:13:23.698: INFO: (10) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:160/proxy/: foo (200; 6.818214ms)
Nov 27 22:13:23.698: INFO: (10) /api/v1/namespaces/proxy-208/services/http:proxy-service-bmlrm:portname2/proxy/: bar (200; 7.345237ms)
Nov 27 22:13:23.699: INFO: (10) /api/v1/namespaces/proxy-208/services/http:proxy-service-bmlrm:portname1/proxy/: foo (200; 7.772379ms)
Nov 27 22:13:23.699: INFO: (10) /api/v1/namespaces/proxy-208/services/https:proxy-service-bmlrm:tlsportname1/proxy/: tls baz (200; 7.600678ms)
Nov 27 22:13:23.699: INFO: (10) /api/v1/namespaces/proxy-208/services/proxy-service-bmlrm:portname1/proxy/: foo (200; 7.745057ms)
Nov 27 22:13:23.699: INFO: (10) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:160/proxy/: foo (200; 7.594736ms)
Nov 27 22:13:23.703: INFO: (11) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:162/proxy/: bar (200; 3.871058ms)
Nov 27 22:13:23.704: INFO: (11) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:443/proxy/: test (200; 3.469017ms)
Nov 27 22:13:23.704: INFO: (11) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:462/proxy/: tls qux (200; 4.225766ms)
Nov 27 22:13:23.706: INFO: (11) /api/v1/namespaces/proxy-208/services/https:proxy-service-bmlrm:tlsportname1/proxy/: tls baz (200; 5.086409ms)
Nov 27 22:13:23.706: INFO: (11) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:1080/proxy/: t... (200; 5.104662ms)
Nov 27 22:13:23.706: INFO: (11) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:162/proxy/: bar (200; 5.058351ms)
Nov 27 22:13:23.706: INFO: (11) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:1080/proxy/: testtest (200; 3.870328ms)
Nov 27 22:13:23.712: INFO: (12) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:460/proxy/: tls baz (200; 4.227417ms)
Nov 27 22:13:23.712: INFO: (12) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:443/proxy/: t... (200; 4.94955ms)
Nov 27 22:13:23.713: INFO: (12) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:162/proxy/: bar (200; 5.185008ms)
Nov 27 22:13:23.713: INFO: (12) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:462/proxy/: tls qux (200; 5.15925ms)
Nov 27 22:13:23.714: INFO: (12) /api/v1/namespaces/proxy-208/services/https:proxy-service-bmlrm:tlsportname1/proxy/: tls baz (200; 5.887274ms)
Nov 27 22:13:23.714: INFO: (12) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:162/proxy/: bar (200; 5.861391ms)
Nov 27 22:13:23.714: INFO: (12) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:1080/proxy/: testtestt... (200; 5.72641ms)
Nov 27 22:13:23.721: INFO: (13) /api/v1/namespaces/proxy-208/services/https:proxy-service-bmlrm:tlsportname2/proxy/: tls qux (200; 5.981204ms)
Nov 27 22:13:23.721: INFO: (13) /api/v1/namespaces/proxy-208/services/http:proxy-service-bmlrm:portname1/proxy/: foo (200; 5.8546ms)
Nov 27 22:13:23.721: INFO: (13) /api/v1/namespaces/proxy-208/services/http:proxy-service-bmlrm:portname2/proxy/: bar (200; 5.981436ms)
Nov 27 22:13:23.721: INFO: (13) /api/v1/namespaces/proxy-208/services/proxy-service-bmlrm:portname2/proxy/: bar (200; 6.090016ms)
Nov 27 22:13:23.721: INFO: (13) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8/proxy/: test (200; 5.954297ms)
Nov 27 22:13:23.721: INFO: (13) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:162/proxy/: bar (200; 6.161642ms)
Nov 27 22:13:23.722: INFO: (13) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:462/proxy/: tls qux (200; 6.256794ms)
Nov 27 22:13:23.722: INFO: (13) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:460/proxy/: tls baz (200; 6.634125ms)
Nov 27 22:13:23.722: INFO: (13) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:160/proxy/: foo (200; 6.818223ms)
Nov 27 22:13:23.722: INFO: (13) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:443/proxy/: t... (200; 5.741369ms)
Nov 27 22:13:23.729: INFO: (14) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:462/proxy/: tls qux (200; 6.0134ms)
Nov 27 22:13:23.729: INFO: (14) /api/v1/namespaces/proxy-208/services/http:proxy-service-bmlrm:portname2/proxy/: bar (200; 5.963249ms)
Nov 27 22:13:23.729: INFO: (14) /api/v1/namespaces/proxy-208/services/http:proxy-service-bmlrm:portname1/proxy/: foo (200; 6.185003ms)
Nov 27 22:13:23.730: INFO: (14) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8/proxy/: test (200; 6.031991ms)
Nov 27 22:13:23.730: INFO: (14) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:443/proxy/: testtest (200; 4.614854ms)
Nov 27 22:13:23.736: INFO: (15) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:1080/proxy/: t... (200; 4.883117ms)
Nov 27 22:13:23.736: INFO: (15) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:460/proxy/: tls baz (200; 5.08244ms)
Nov 27 22:13:23.736: INFO: (15) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:1080/proxy/: testtesttest (200; 5.721929ms)
Nov 27 22:13:23.744: INFO: (16) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:1080/proxy/: t... (200; 5.90474ms)
Nov 27 22:13:23.744: INFO: (16) /api/v1/namespaces/proxy-208/services/proxy-service-bmlrm:portname2/proxy/: bar (200; 6.073856ms)
Nov 27 22:13:23.744: INFO: (16) /api/v1/namespaces/proxy-208/pods/http:proxy-service-bmlrm-bp8q8:162/proxy/: bar (200; 6.331302ms)
Nov 27 22:13:23.744: INFO: (16) /api/v1/namespaces/proxy-208/services/https:proxy-service-bmlrm:tlsportname1/proxy/: tls baz (200; 6.475899ms)
Nov 27 22:13:23.745: INFO: (16) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:443/proxy/: test (200; 4.016963ms)
Nov 27 22:13:23.749: INFO: (17) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:1080/proxy/: testt... (200; 4.090468ms)
Nov 27 22:13:23.749: INFO: (17) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:443/proxy/: testt... (200; 5.702125ms)
Nov 27 22:13:23.758: INFO: (18) /api/v1/namespaces/proxy-208/services/proxy-service-bmlrm:portname1/proxy/: foo (200; 5.696037ms)
Nov 27 22:13:23.758: INFO: (18) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8/proxy/: test (200; 5.579036ms)
Nov 27 22:13:23.758: INFO: (18) /api/v1/namespaces/proxy-208/services/https:proxy-service-bmlrm:tlsportname2/proxy/: tls qux (200; 5.898825ms)
Nov 27 22:13:23.758: INFO: (18) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:462/proxy/: tls qux (200; 5.801937ms)
Nov 27 22:13:23.758: INFO: (18) /api/v1/namespaces/proxy-208/services/http:proxy-service-bmlrm:portname1/proxy/: foo (200; 5.998317ms)
Nov 27 22:13:23.758: INFO: (18) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:443/proxy/: testt... (200; 5.82018ms)
Nov 27 22:13:23.765: INFO: (19) /api/v1/namespaces/proxy-208/services/https:proxy-service-bmlrm:tlsportname1/proxy/: tls baz (200; 6.133153ms)
Nov 27 22:13:23.765: INFO: (19) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8/proxy/: test (200; 6.041826ms)
Nov 27 22:13:23.765: INFO: (19) /api/v1/namespaces/proxy-208/services/http:proxy-service-bmlrm:portname1/proxy/: foo (200; 6.193626ms)
Nov 27 22:13:23.765: INFO: (19) /api/v1/namespaces/proxy-208/pods/https:proxy-service-bmlrm-bp8q8:462/proxy/: tls qux (200; 6.344851ms)
Nov 27 22:13:23.765: INFO: (19) /api/v1/namespaces/proxy-208/pods/proxy-service-bmlrm-bp8q8:162/proxy/: bar (200; 6.361678ms)
STEP: deleting ReplicationController proxy-service-bmlrm in namespace proxy-208, will wait for the garbage collector to delete the pods
Nov 27 22:13:23.826: INFO: Deleting ReplicationController proxy-service-bmlrm took: 6.42732ms
Nov 27 22:13:23.927: INFO: Terminating ReplicationController proxy-service-bmlrm pods took: 100.753085ms
[AfterEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:13:26.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-208" for this suite.
Nov 27 22:13:32.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:13:32.569: INFO: namespace proxy-208 deletion completed in 6.229234318s

• [SLOW TEST:21.210 seconds]
[sig-network] Proxy
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
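The spec above repeatedly exercises the apiserver proxy subresource for services and pods. As a rough illustration of the URL shapes visible in the log lines (the helper names here are made up for this sketch; the real test builds the equivalent paths in Go in test/e2e/network/proxy.go):

```python
# Sketch of the apiserver proxy URL patterns exercised by the
# "should proxy through a service and a pod" spec above.
# Helper names are illustrative only, not part of the e2e framework.

def build_service_proxy_url(namespace, service, port_name, scheme=""):
    """Proxy path for a named service port, e.g.
    /api/v1/namespaces/proxy-208/services/http:proxy-service-bmlrm:portname1/proxy/
    An empty scheme omits the "http:"/"https:" prefix."""
    prefix = scheme + ":" if scheme else ""
    return ("/api/v1/namespaces/%s/services/%s%s:%s/proxy/"
            % (namespace, prefix, service, port_name))

def build_pod_proxy_url(namespace, pod, port=None, scheme=""):
    """Proxy path for a pod, optionally targeting a specific port."""
    prefix = scheme + ":" if scheme else ""
    suffix = ":%d" % port if port is not None else ""
    return ("/api/v1/namespaces/%s/pods/%s%s%s/proxy/"
            % (namespace, prefix, pod, suffix))
```

These reproduce the paths logged above, e.g. `build_pod_proxy_url("proxy-208", "proxy-service-bmlrm-bp8q8", 160)` yields the `/pods/proxy-service-bmlrm-bp8q8:160/proxy/` form the test hits in each iteration.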
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:13:32.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Nov 27 22:13:32.689: INFO: Waiting up to 5m0s for pod "downwardapi-volume-90a28267-cf1f-43c0-b158-5f8ffa0856a1" in namespace "projected-4011" to be "success or failure"
Nov 27 22:13:32.710: INFO: Pod "downwardapi-volume-90a28267-cf1f-43c0-b158-5f8ffa0856a1": Phase="Pending", Reason="", readiness=false. Elapsed: 21.54713ms
Nov 27 22:13:34.718: INFO: Pod "downwardapi-volume-90a28267-cf1f-43c0-b158-5f8ffa0856a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028924937s
Nov 27 22:13:36.726: INFO: Pod "downwardapi-volume-90a28267-cf1f-43c0-b158-5f8ffa0856a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037113183s
STEP: Saw pod success
Nov 27 22:13:36.726: INFO: Pod "downwardapi-volume-90a28267-cf1f-43c0-b158-5f8ffa0856a1" satisfied condition "success or failure"
Nov 27 22:13:36.730: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-90a28267-cf1f-43c0-b158-5f8ffa0856a1 container client-container: 
STEP: delete the pod
Nov 27 22:13:36.755: INFO: Waiting for pod downwardapi-volume-90a28267-cf1f-43c0-b158-5f8ffa0856a1 to disappear
Nov 27 22:13:36.765: INFO: Pod downwardapi-volume-90a28267-cf1f-43c0-b158-5f8ffa0856a1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:13:36.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4011" for this suite.
Nov 27 22:13:42.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:13:42.957: INFO: namespace projected-4011 deletion completed in 6.184149668s

• [SLOW TEST:10.383 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
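The Projected downwardAPI spec above creates a pod whose projected volume exposes the container's memory request via `resourceFieldRef`, then checks the file contents in the container log. A minimal sketch of such a manifest, written as a Python dict (the image, command, mount path, and request size are placeholders, not the exact values the e2e framework uses):

```python
# Hypothetical minimal pod resembling what the
# "should provide container's memory request" spec creates:
# a projected downwardAPI volume publishing requests.memory to a file.

def downward_api_pod(name, namespace):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "client-container",
                "image": "busybox:1.29",  # placeholder image
                "command": ["sh", "-c", "cat /etc/podinfo/memory_request"],
                "resources": {"requests": {"memory": "32Mi"}},
                "volumeMounts": [{"name": "podinfo",
                                  "mountPath": "/etc/podinfo"}],
            }],
            "volumes": [{
                "name": "podinfo",
                "projected": {"sources": [{
                    "downwardAPI": {"items": [{
                        "path": "memory_request",
                        "resourceFieldRef": {
                            "containerName": "client-container",
                            "resource": "requests.memory",
                        },
                    }]},
                }]},
            }],
        },
    }
```

The test then waits for the pod to reach Succeeded ("success or failure" in the log) and reads the container log, which should contain the memory request in bytes.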
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:13:42.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Nov 27 22:13:43.050: INFO: PodSpec: initContainers in spec.initContainers
Nov 27 22:14:33.359: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b176d528-e64f-4e45-bcf1-1e02107a1e19", GenerateName:"", Namespace:"init-container-4270", SelfLink:"/api/v1/namespaces/init-container-4270/pods/pod-init-b176d528-e64f-4e45-bcf1-1e02107a1e19", UID:"118433c2-3d94-4013-9b86-d54f143520f6", ResourceVersion:"11926241", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63742112023, loc:(*time.Location)(0x792fa60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"49947698"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-bzpgr", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x4001d04040), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bzpgr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bzpgr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bzpgr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4002186688), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0x40028f2000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x4002186830)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x40021868c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x40021868c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x40021868cc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63742112023, loc:(*time.Location)(0x792fa60)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63742112023, loc:(*time.Location)(0x792fa60)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63742112023, loc:(*time.Location)(0x792fa60)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63742112023, loc:(*time.Location)(0x792fa60)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.5", PodIP:"10.244.2.15", StartTime:(*v1.Time)(0x400169e0a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x400215c070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x400215c0e0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://d583b5daa95b44c7741fb571ba7103582c34a6c8596effa9937fcec4c148b76d"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x400169e0e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x400169e0c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:14:33.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4270" for this suite.
Nov 27 22:14:55.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:14:55.579: INFO: namespace init-container-4270 deletion completed in 22.201584027s

• [SLOW TEST:72.618 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:14:55.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Nov 27 22:15:03.771: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 27 22:15:03.827: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 27 22:15:05.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 27 22:15:05.833: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 27 22:15:07.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 27 22:15:07.833: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 27 22:15:09.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 27 22:15:09.834: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 27 22:15:11.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 27 22:15:11.834: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 27 22:15:13.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 27 22:15:13.833: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 27 22:15:15.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 27 22:15:15.833: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 27 22:15:17.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 27 22:15:17.834: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 27 22:15:19.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 27 22:15:19.834: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 27 22:15:21.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 27 22:15:21.834: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 27 22:15:23.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 27 22:15:23.834: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 27 22:15:25.827: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 27 22:15:25.833: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:15:25.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7077" for this suite.
Nov 27 22:15:47.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:15:48.011: INFO: namespace container-lifecycle-hook-7077 deletion completed in 22.169106863s

• [SLOW TEST:52.429 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
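(Editor's note: the teardown above polls every 2 seconds until the pod object is gone from the API. A minimal sketch of that wait-for-disappearance pattern, in plain Python rather than the framework's Go, with a hypothetical `pod_exists` callable standing in for the API lookup:)

```python
import time

def wait_for_disappear(pod_exists, interval=2.0, timeout=60.0):
    """Poll until pod_exists() returns False, mirroring the
    'Waiting for pod ... to disappear' loop in the log above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if not pod_exists():
            return True   # pod no longer exists
        time.sleep(interval)
    return False          # timed out; pod still exists

# Simulated lookup: the pod vanishes on the third check.
calls = {"n": 0}
def fake_lookup():
    calls["n"] += 1
    return calls["n"] < 3

print(wait_for_disappear(fake_lookup, interval=0))  # → True
```

The log shows twelve such polls before "no longer exists"; the pattern is identical, only the predicate (a GET against the API server) differs.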
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:15:48.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-82d81807-88ef-41d5-8c9d-66fd4efea066
STEP: Creating a pod to test consume configMaps
Nov 27 22:15:48.094: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dab4811a-8428-4652-94ee-5fa5f9913069" in namespace "projected-359" to be "success or failure"
Nov 27 22:15:48.138: INFO: Pod "pod-projected-configmaps-dab4811a-8428-4652-94ee-5fa5f9913069": Phase="Pending", Reason="", readiness=false. Elapsed: 43.77131ms
Nov 27 22:15:50.146: INFO: Pod "pod-projected-configmaps-dab4811a-8428-4652-94ee-5fa5f9913069": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050883293s
Nov 27 22:15:52.153: INFO: Pod "pod-projected-configmaps-dab4811a-8428-4652-94ee-5fa5f9913069": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058104933s
STEP: Saw pod success
Nov 27 22:15:52.153: INFO: Pod "pod-projected-configmaps-dab4811a-8428-4652-94ee-5fa5f9913069" satisfied condition "success or failure"
Nov 27 22:15:52.157: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-dab4811a-8428-4652-94ee-5fa5f9913069 container projected-configmap-volume-test: 
STEP: delete the pod
Nov 27 22:15:52.177: INFO: Waiting for pod pod-projected-configmaps-dab4811a-8428-4652-94ee-5fa5f9913069 to disappear
Nov 27 22:15:52.182: INFO: Pod pod-projected-configmaps-dab4811a-8428-4652-94ee-5fa5f9913069 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:15:52.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-359" for this suite.
Nov 27 22:15:58.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:15:58.416: INFO: namespace projected-359 deletion completed in 6.22656096s

• [SLOW TEST:10.403 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
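(Editor's note: the defaultMode spec above mounts the projected configMap and has the test container print the file's permission string. As an illustration, independent of any cluster, of how an octal defaultMode such as 0644 corresponds to the mode string such a test compares against:)

```python
import stat

def mode_string(default_mode: int) -> str:
    # stat.filemode renders permission bits the way `ls -l` does;
    # S_IFREG marks a regular file so the first column is '-'.
    return stat.filemode(stat.S_IFREG | default_mode)

print(mode_string(0o644))  # → -rw-r--r--
print(mode_string(0o400))  # → -r--------
```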
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:15:58.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Nov 27 22:15:58.496: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Nov 27 22:15:58.519: INFO: Waiting for terminating namespaces to be deleted...
Nov 27 22:15:58.523: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Nov 27 22:15:58.537: INFO: kindnet-7bsvw from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container status recorded)
Nov 27 22:15:58.537: INFO: 	Container kindnet-cni ready: true, restart count 0
Nov 27 22:15:58.537: INFO: chaos-controller-manager-6c68f56f79-dmwmx from default started at 2020-11-23 00:43:52 +0000 UTC (1 container status recorded)
Nov 27 22:15:58.538: INFO: 	Container chaos-mesh ready: true, restart count 0
Nov 27 22:15:58.538: INFO: chaos-daemon-m4wrh from default started at 2020-11-23 00:43:52 +0000 UTC (1 container status recorded)
Nov 27 22:15:58.538: INFO: 	Container chaos-daemon ready: true, restart count 0
Nov 27 22:15:58.538: INFO: kube-proxy-mtljr from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container status recorded)
Nov 27 22:15:58.538: INFO: 	Container kube-proxy ready: true, restart count 0
Nov 27 22:15:58.538: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Nov 27 22:15:58.552: INFO: chaos-daemon-fcg7h from default started at 2020-11-23 00:43:52 +0000 UTC (1 container status recorded)
Nov 27 22:15:58.552: INFO: 	Container chaos-daemon ready: true, restart count 0
Nov 27 22:15:58.552: INFO: kindnet-djqgh from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container status recorded)
Nov 27 22:15:58.552: INFO: 	Container kindnet-cni ready: true, restart count 0
Nov 27 22:15:58.552: INFO: kube-proxy-52wt5 from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container status recorded)
Nov 27 22:15:58.553: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Nov 27 22:15:58.675: INFO: Pod chaos-controller-manager-6c68f56f79-dmwmx requesting resource cpu=25m on Node iruya-worker
Nov 27 22:15:58.675: INFO: Pod chaos-daemon-fcg7h requesting resource cpu=0m on Node iruya-worker2
Nov 27 22:15:58.675: INFO: Pod chaos-daemon-m4wrh requesting resource cpu=0m on Node iruya-worker
Nov 27 22:15:58.675: INFO: Pod kindnet-7bsvw requesting resource cpu=100m on Node iruya-worker
Nov 27 22:15:58.675: INFO: Pod kindnet-djqgh requesting resource cpu=100m on Node iruya-worker2
Nov 27 22:15:58.675: INFO: Pod kube-proxy-52wt5 requesting resource cpu=0m on Node iruya-worker2
Nov 27 22:15:58.675: INFO: Pod kube-proxy-mtljr requesting resource cpu=0m on Node iruya-worker
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3ff100eb-3059-41bd-89bb-1e20cbfb1c80.164b7d350486bca5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7463/filler-pod-3ff100eb-3059-41bd-89bb-1e20cbfb1c80 to iruya-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3ff100eb-3059-41bd-89bb-1e20cbfb1c80.164b7d3598de91fe], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3ff100eb-3059-41bd-89bb-1e20cbfb1c80.164b7d35e09476c7], Reason = [Created], Message = [Created container filler-pod-3ff100eb-3059-41bd-89bb-1e20cbfb1c80]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-3ff100eb-3059-41bd-89bb-1e20cbfb1c80.164b7d35f06bab26], Reason = [Started], Message = [Started container filler-pod-3ff100eb-3059-41bd-89bb-1e20cbfb1c80]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fe4525d5-d5ac-47bb-9341-d497e72a80a6.164b7d3501b839d2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7463/filler-pod-fe4525d5-d5ac-47bb-9341-d497e72a80a6 to iruya-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fe4525d5-d5ac-47bb-9341-d497e72a80a6.164b7d355174ff73], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fe4525d5-d5ac-47bb-9341-d497e72a80a6.164b7d35bfc0685c], Reason = [Created], Message = [Created container filler-pod-fe4525d5-d5ac-47bb-9341-d497e72a80a6]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-fe4525d5-d5ac-47bb-9341-d497e72a80a6.164b7d35d9295230], Reason = [Started], Message = [Started container filler-pod-fe4525d5-d5ac-47bb-9341-d497e72a80a6]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.164b7d366cbd8ef0], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:16:05.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7463" for this suite.
Nov 27 22:16:12.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:16:12.248: INFO: namespace sched-pred-7463 deletion completed in 6.3279764s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:13.832 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
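(Editor's note: the predicate exercised above is simple arithmetic — sum the CPU requests already placed on each node, and reject a pod whose request exceeds what remains, which is why the additional pod fails with "Insufficient cpu" after the filler pods land. A sketch with made-up millicore figures, since the real allocatable values are not printed in this log:)

```python
def nodes_that_fit(allocatable_m, in_use_m, pod_request_m):
    """Return nodes with enough spare CPU (all values in millicores)."""
    return [
        node for node, alloc in allocatable_m.items()
        if alloc - in_use_m.get(node, 0) >= pod_request_m
    ]

# Hypothetical numbers in the spirit of the test: filler pods leave
# almost nothing free, so a larger request fails on every worker.
allocatable = {"iruya-worker": 16000, "iruya-worker2": 16000}
in_use = {"iruya-worker": 15900, "iruya-worker2": 15900}

print(nodes_that_fit(allocatable, in_use, 50))   # → both workers fit
print(nodes_that_fit(allocatable, in_use, 500))  # → []  ("Insufficient cpu")
```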
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:16:12.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Nov 27 22:16:17.089: INFO: Successfully updated pod "pod-update-09361261-061d-4eab-a7bb-b8ffdb3f0a63"
STEP: verifying the updated pod is in kubernetes
Nov 27 22:16:17.116: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:16:17.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6103" for this suite.
Nov 27 22:16:39.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:16:39.338: INFO: namespace pods-6103 deletion completed in 22.183509449s

• [SLOW TEST:27.087 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:16:39.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:16:39.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8294" for this suite.
Nov 27 22:16:45.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:16:45.622: INFO: namespace services-8294 deletion completed in 6.168211903s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.281 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:16:45.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-629d3882-4a6c-4a97-899f-b1b0c3311207
STEP: Creating a pod to test consume configMaps
Nov 27 22:16:45.739: INFO: Waiting up to 5m0s for pod "pod-configmaps-edc1b7e3-8b3a-4be6-9c83-dfc295e3cd23" in namespace "configmap-2059" to be "success or failure"
Nov 27 22:16:45.775: INFO: Pod "pod-configmaps-edc1b7e3-8b3a-4be6-9c83-dfc295e3cd23": Phase="Pending", Reason="", readiness=false. Elapsed: 35.199132ms
Nov 27 22:16:47.783: INFO: Pod "pod-configmaps-edc1b7e3-8b3a-4be6-9c83-dfc295e3cd23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043088652s
Nov 27 22:16:49.790: INFO: Pod "pod-configmaps-edc1b7e3-8b3a-4be6-9c83-dfc295e3cd23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049997487s
STEP: Saw pod success
Nov 27 22:16:49.790: INFO: Pod "pod-configmaps-edc1b7e3-8b3a-4be6-9c83-dfc295e3cd23" satisfied condition "success or failure"
Nov 27 22:16:49.795: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-edc1b7e3-8b3a-4be6-9c83-dfc295e3cd23 container configmap-volume-test: 
STEP: delete the pod
Nov 27 22:16:49.887: INFO: Waiting for pod pod-configmaps-edc1b7e3-8b3a-4be6-9c83-dfc295e3cd23 to disappear
Nov 27 22:16:49.988: INFO: Pod pod-configmaps-edc1b7e3-8b3a-4be6-9c83-dfc295e3cd23 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:16:49.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2059" for this suite.
Nov 27 22:16:56.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:16:56.222: INFO: namespace configmap-2059 deletion completed in 6.214303594s

• [SLOW TEST:10.599 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:16:56.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Nov 27 22:16:56.345: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04eb1c98-4fde-4db7-a423-b2143b6d7946" in namespace "downward-api-1339" to be "success or failure"
Nov 27 22:16:56.351: INFO: Pod "downwardapi-volume-04eb1c98-4fde-4db7-a423-b2143b6d7946": Phase="Pending", Reason="", readiness=false. Elapsed: 5.749802ms
Nov 27 22:16:58.359: INFO: Pod "downwardapi-volume-04eb1c98-4fde-4db7-a423-b2143b6d7946": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013591199s
Nov 27 22:17:00.367: INFO: Pod "downwardapi-volume-04eb1c98-4fde-4db7-a423-b2143b6d7946": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021595793s
STEP: Saw pod success
Nov 27 22:17:00.367: INFO: Pod "downwardapi-volume-04eb1c98-4fde-4db7-a423-b2143b6d7946" satisfied condition "success or failure"
Nov 27 22:17:00.372: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-04eb1c98-4fde-4db7-a423-b2143b6d7946 container client-container: 
STEP: delete the pod
Nov 27 22:17:00.400: INFO: Waiting for pod downwardapi-volume-04eb1c98-4fde-4db7-a423-b2143b6d7946 to disappear
Nov 27 22:17:00.418: INFO: Pod downwardapi-volume-04eb1c98-4fde-4db7-a423-b2143b6d7946 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:17:00.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1339" for this suite.
Nov 27 22:17:06.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:17:06.652: INFO: namespace downward-api-1339 deletion completed in 6.22570713s

• [SLOW TEST:10.426 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
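(Editor's note: the spec above projects the container's CPU limit into a file via a downward-API resourceFieldRef. The value written is the resource quantity divided by the field's divisor, with fractional results rounded up. A sketch of that conversion — the two divisors shown here, "1" and "1m", are the common cases, and the round-up behavior is an assumption based on how integer CPU values are exposed:)

```python
from fractions import Fraction
import math

def projected_value(quantity_m: int, divisor: str) -> int:
    """Divide a CPU quantity (given in millicores) by a downward-API
    divisor and round up, which is how fractional results surface."""
    divisors_m = {"1": 1000, "1m": 1}  # divisor expressed in millicores
    return math.ceil(Fraction(quantity_m, divisors_m[divisor]))

print(projected_value(1250, "1m"))  # → 1250 (millicores pass through)
print(projected_value(1250, "1"))   # → 2    (1.25 cores rounds up)
```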
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:17:06.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:17:13.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9297" for this suite.
Nov 27 22:17:19.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:17:19.305: INFO: namespace namespaces-9297 deletion completed in 6.179566829s
STEP: Destroying namespace "nsdeletetest-964" for this suite.
Nov 27 22:17:19.309: INFO: Namespace nsdeletetest-964 was already deleted
STEP: Destroying namespace "nsdeletetest-8360" for this suite.
Nov 27 22:17:25.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:17:25.498: INFO: namespace nsdeletetest-8360 deletion completed in 6.188315023s

• [SLOW TEST:18.845 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:17:25.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Nov 27 22:17:25.622: INFO: Waiting up to 5m0s for pod "var-expansion-0a0de92b-1087-4d06-b49f-ad95b7fc4145" in namespace "var-expansion-7022" to be "success or failure"
Nov 27 22:17:25.653: INFO: Pod "var-expansion-0a0de92b-1087-4d06-b49f-ad95b7fc4145": Phase="Pending", Reason="", readiness=false. Elapsed: 30.706499ms
Nov 27 22:17:27.660: INFO: Pod "var-expansion-0a0de92b-1087-4d06-b49f-ad95b7fc4145": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037131913s
Nov 27 22:17:29.671: INFO: Pod "var-expansion-0a0de92b-1087-4d06-b49f-ad95b7fc4145": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048563467s
STEP: Saw pod success
Nov 27 22:17:29.671: INFO: Pod "var-expansion-0a0de92b-1087-4d06-b49f-ad95b7fc4145" satisfied condition "success or failure"
Nov 27 22:17:29.676: INFO: Trying to get logs from node iruya-worker pod var-expansion-0a0de92b-1087-4d06-b49f-ad95b7fc4145 container dapi-container: 
STEP: delete the pod
Nov 27 22:17:29.694: INFO: Waiting for pod var-expansion-0a0de92b-1087-4d06-b49f-ad95b7fc4145 to disappear
Nov 27 22:17:29.698: INFO: Pod var-expansion-0a0de92b-1087-4d06-b49f-ad95b7fc4145 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:17:29.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7022" for this suite.
Nov 27 22:17:35.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:17:35.888: INFO: namespace var-expansion-7022 deletion completed in 6.181986985s

• [SLOW TEST:10.388 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:17:35.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Nov 27 22:17:35.947: INFO: Creating ReplicaSet my-hostname-basic-d90f29f1-4078-4079-99c5-60954ac0d891
Nov 27 22:17:36.006: INFO: Pod name my-hostname-basic-d90f29f1-4078-4079-99c5-60954ac0d891: Found 1 pods out of 1
Nov 27 22:17:36.007: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-d90f29f1-4078-4079-99c5-60954ac0d891" is running
Nov 27 22:17:40.034: INFO: Pod "my-hostname-basic-d90f29f1-4078-4079-99c5-60954ac0d891-rk4vz" is running (conditions: [])
Nov 27 22:17:40.034: INFO: Trying to dial the pod
Nov 27 22:17:45.052: INFO: Controller my-hostname-basic-d90f29f1-4078-4079-99c5-60954ac0d891: Got expected result from replica 1 [my-hostname-basic-d90f29f1-4078-4079-99c5-60954ac0d891-rk4vz]: "my-hostname-basic-d90f29f1-4078-4079-99c5-60954ac0d891-rk4vz", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:17:45.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6031" for this suite.
Nov 27 22:17:51.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:17:51.269: INFO: namespace replicaset-6031 deletion completed in 6.207418009s

• [SLOW TEST:15.379 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:17:51.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Nov 27 22:17:51.380: INFO: Waiting up to 5m0s for pod "pod-04099dd1-8e3e-4a24-bb06-54d516ee5ce5" in namespace "emptydir-4998" to be "success or failure"
Nov 27 22:17:51.388: INFO: Pod "pod-04099dd1-8e3e-4a24-bb06-54d516ee5ce5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.750096ms
Nov 27 22:17:53.546: INFO: Pod "pod-04099dd1-8e3e-4a24-bb06-54d516ee5ce5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166577827s
Nov 27 22:17:55.555: INFO: Pod "pod-04099dd1-8e3e-4a24-bb06-54d516ee5ce5": Phase="Running", Reason="", readiness=true. Elapsed: 4.175311172s
Nov 27 22:17:57.563: INFO: Pod "pod-04099dd1-8e3e-4a24-bb06-54d516ee5ce5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.183084975s
STEP: Saw pod success
Nov 27 22:17:57.563: INFO: Pod "pod-04099dd1-8e3e-4a24-bb06-54d516ee5ce5" satisfied condition "success or failure"
Nov 27 22:17:57.568: INFO: Trying to get logs from node iruya-worker pod pod-04099dd1-8e3e-4a24-bb06-54d516ee5ce5 container test-container: 
STEP: delete the pod
Nov 27 22:17:57.593: INFO: Waiting for pod pod-04099dd1-8e3e-4a24-bb06-54d516ee5ce5 to disappear
Nov 27 22:17:57.649: INFO: Pod pod-04099dd1-8e3e-4a24-bb06-54d516ee5ce5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:17:57.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4998" for this suite.
Nov 27 22:18:03.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:18:03.884: INFO: namespace emptydir-4998 deletion completed in 6.226701607s

• [SLOW TEST:12.613 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:18:03.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7834.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7834.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7834.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7834.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7834.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7834.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7834.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7834.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7834.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7834.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7834.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 246.32.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.32.246_udp@PTR;check="$$(dig +tcp +noall +answer +search 246.32.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.32.246_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7834.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7834.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7834.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7834.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7834.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7834.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7834.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7834.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7834.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7834.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7834.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 246.32.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.32.246_udp@PTR;check="$$(dig +tcp +noall +answer +search 246.32.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.32.246_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Nov 27 22:18:10.161: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local from pod dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5: the server could not find the requested resource (get pods dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5)
Nov 27 22:18:10.165: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local from pod dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5: the server could not find the requested resource (get pods dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5)
Nov 27 22:18:10.197: INFO: Unable to read jessie_tcp@dns-test-service.dns-7834.svc.cluster.local from pod dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5: the server could not find the requested resource (get pods dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5)
Nov 27 22:18:10.202: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local from pod dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5: the server could not find the requested resource (get pods dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5)
Nov 27 22:18:10.206: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local from pod dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5: the server could not find the requested resource (get pods dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5)
Nov 27 22:18:10.232: INFO: Lookups using dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local jessie_tcp@dns-test-service.dns-7834.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local]

Nov 27 22:18:15.250: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local from pod dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5: the server could not find the requested resource (get pods dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5)
Nov 27 22:18:15.256: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local from pod dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5: the server could not find the requested resource (get pods dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5)
Nov 27 22:18:15.288: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local from pod dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5: the server could not find the requested resource (get pods dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5)
Nov 27 22:18:15.292: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local from pod dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5: the server could not find the requested resource (get pods dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5)
Nov 27 22:18:15.316: INFO: Lookups using dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local]

Nov 27 22:18:20.252: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local from pod dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5: the server could not find the requested resource (get pods dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5)
Nov 27 22:18:20.256: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local from pod dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5: the server could not find the requested resource (get pods dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5)
Nov 27 22:18:20.302: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local from pod dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5: the server could not find the requested resource (get pods dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5)
Nov 27 22:18:20.314: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local from pod dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5: the server could not find the requested resource (get pods dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5)
Nov 27 22:18:20.334: INFO: Lookups using dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local]

Nov 27 22:18:25.256: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local from pod dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5: the server could not find the requested resource (get pods dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5)
Nov 27 22:18:25.260: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local from pod dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5: the server could not find the requested resource (get pods dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5)
Nov 27 22:18:25.319: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local from pod dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5: the server could not find the requested resource (get pods dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5)
Nov 27 22:18:25.323: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local from pod dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5: the server could not find the requested resource (get pods dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5)
Nov 27 22:18:25.350: INFO: Lookups using dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local]

Nov 27 22:18:30.251: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local from pod dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5: the server could not find the requested resource (get pods dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5)
Nov 27 22:18:30.255: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local from pod dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5: the server could not find the requested resource (get pods dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5)
Nov 27 22:18:30.293: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local from pod dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5: the server could not find the requested resource (get pods dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5)
Nov 27 22:18:30.297: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local from pod dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5: the server could not find the requested resource (get pods dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5)
Nov 27 22:18:30.342: INFO: Lookups using dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local]

Nov 27 22:18:35.259: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local from pod dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5: the server could not find the requested resource (get pods dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5)
Nov 27 22:18:35.262: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local from pod dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5: the server could not find the requested resource (get pods dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5)
Nov 27 22:18:35.294: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local from pod dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5: the server could not find the requested resource (get pods dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5)
Nov 27 22:18:35.298: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local from pod dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5: the server could not find the requested resource (get pods dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5)
Nov 27 22:18:35.322: INFO: Lookups using dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7834.svc.cluster.local]

Nov 27 22:18:40.323: INFO: DNS probes using dns-7834/dns-test-bdeb30a1-1ed0-4dd1-859e-7ac80ffb87f5 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:18:40.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7834" for this suite.
Nov 27 22:18:46.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:18:47.247: INFO: namespace dns-7834 deletion completed in 6.266264044s

• [SLOW TEST:43.361 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:18:47.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Nov 27 22:18:47.306: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b562be0d-8330-48e6-a0e3-c20ba0268fca" in namespace "downward-api-5852" to be "success or failure"
Nov 27 22:18:47.341: INFO: Pod "downwardapi-volume-b562be0d-8330-48e6-a0e3-c20ba0268fca": Phase="Pending", Reason="", readiness=false. Elapsed: 34.517375ms
Nov 27 22:18:49.559: INFO: Pod "downwardapi-volume-b562be0d-8330-48e6-a0e3-c20ba0268fca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.252298359s
Nov 27 22:18:51.564: INFO: Pod "downwardapi-volume-b562be0d-8330-48e6-a0e3-c20ba0268fca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.257203668s
STEP: Saw pod success
Nov 27 22:18:51.564: INFO: Pod "downwardapi-volume-b562be0d-8330-48e6-a0e3-c20ba0268fca" satisfied condition "success or failure"
Nov 27 22:18:51.579: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-b562be0d-8330-48e6-a0e3-c20ba0268fca container client-container: 
STEP: delete the pod
Nov 27 22:18:51.656: INFO: Waiting for pod downwardapi-volume-b562be0d-8330-48e6-a0e3-c20ba0268fca to disappear
Nov 27 22:18:51.666: INFO: Pod downwardapi-volume-b562be0d-8330-48e6-a0e3-c20ba0268fca no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:18:51.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5852" for this suite.
Nov 27 22:18:57.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:18:57.847: INFO: namespace downward-api-5852 deletion completed in 6.172969051s

• [SLOW TEST:10.600 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:18:57.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-9638, will wait for the garbage collector to delete the pods
Nov 27 22:19:04.053: INFO: Deleting Job.batch foo took: 9.500446ms
Nov 27 22:19:04.353: INFO: Terminating Job.batch foo pods took: 300.798701ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:19:45.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9638" for this suite.
Nov 27 22:19:51.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:19:51.685: INFO: namespace job-9638 deletion completed in 6.214213556s

• [SLOW TEST:53.836 seconds]
[sig-apps] Job
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:19:51.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Nov 27 22:19:56.318: INFO: Successfully updated pod "labelsupdate4ecee3f1-1479-4949-82be-bc6b1dbc0326"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:20:00.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4296" for this suite.
Nov 27 22:20:22.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:20:22.555: INFO: namespace projected-4296 deletion completed in 22.185176556s

• [SLOW TEST:30.868 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:20:22.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Nov 27 22:20:22.669: INFO: Waiting up to 5m0s for pod "client-containers-508e3500-1925-44c4-a4a0-f98d5ca14316" in namespace "containers-9730" to be "success or failure"
Nov 27 22:20:22.684: INFO: Pod "client-containers-508e3500-1925-44c4-a4a0-f98d5ca14316": Phase="Pending", Reason="", readiness=false. Elapsed: 15.343289ms
Nov 27 22:20:24.691: INFO: Pod "client-containers-508e3500-1925-44c4-a4a0-f98d5ca14316": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022543569s
Nov 27 22:20:26.698: INFO: Pod "client-containers-508e3500-1925-44c4-a4a0-f98d5ca14316": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028795263s
STEP: Saw pod success
Nov 27 22:20:26.698: INFO: Pod "client-containers-508e3500-1925-44c4-a4a0-f98d5ca14316" satisfied condition "success or failure"
Nov 27 22:20:26.702: INFO: Trying to get logs from node iruya-worker pod client-containers-508e3500-1925-44c4-a4a0-f98d5ca14316 container test-container: 
STEP: delete the pod
Nov 27 22:20:26.829: INFO: Waiting for pod client-containers-508e3500-1925-44c4-a4a0-f98d5ca14316 to disappear
Nov 27 22:20:26.839: INFO: Pod client-containers-508e3500-1925-44c4-a4a0-f98d5ca14316 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:20:26.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9730" for this suite.
Nov 27 22:20:32.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:20:33.028: INFO: namespace containers-9730 deletion completed in 6.181950527s

• [SLOW TEST:10.472 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:20:33.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Nov 27 22:20:33.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9801 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Nov 27 22:20:37.307: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI1127 22:20:37.164935    2967 log.go:172] (0x4000676420) (0x40001be780) Create stream\nI1127 22:20:37.167188    2967 log.go:172] (0x4000676420) (0x40001be780) Stream added, broadcasting: 1\nI1127 22:20:37.179859    2967 log.go:172] (0x4000676420) Reply frame received for 1\nI1127 22:20:37.180766    2967 log.go:172] (0x4000676420) (0x40008ce000) Create stream\nI1127 22:20:37.180921    2967 log.go:172] (0x4000676420) (0x40008ce000) Stream added, broadcasting: 3\nI1127 22:20:37.182353    2967 log.go:172] (0x4000676420) Reply frame received for 3\nI1127 22:20:37.182663    2967 log.go:172] (0x4000676420) (0x40001be820) Create stream\nI1127 22:20:37.182734    2967 log.go:172] (0x4000676420) (0x40001be820) Stream added, broadcasting: 5\nI1127 22:20:37.184006    2967 log.go:172] (0x4000676420) Reply frame received for 5\nI1127 22:20:37.184261    2967 log.go:172] (0x4000676420) (0x4000638280) Create stream\nI1127 22:20:37.184322    2967 log.go:172] (0x4000676420) (0x4000638280) Stream added, broadcasting: 7\nI1127 22:20:37.185650    2967 log.go:172] (0x4000676420) Reply frame received for 7\nI1127 22:20:37.188582    2967 log.go:172] (0x40008ce000) (3) Writing data frame\nI1127 22:20:37.190089    2967 log.go:172] (0x40008ce000) (3) Writing data frame\nI1127 22:20:37.191248    2967 log.go:172] (0x4000676420) Data frame received for 5\nI1127 22:20:37.191448    2967 log.go:172] (0x40001be820) (5) Data frame handling\nI1127 22:20:37.191750    2967 log.go:172] (0x40001be820) (5) Data frame sent\nI1127 22:20:37.192158    2967 log.go:172] (0x4000676420) Data frame received for 5\nI1127 22:20:37.192244    2967 log.go:172] (0x40001be820) (5) Data frame handling\nI1127 22:20:37.192385    2967 log.go:172] (0x40001be820) (5) Data frame 
sent\nI1127 22:20:37.235217    2967 log.go:172] (0x4000676420) Data frame received for 7\nI1127 22:20:37.235873    2967 log.go:172] (0x4000638280) (7) Data frame handling\nI1127 22:20:37.239277    2967 log.go:172] (0x4000676420) Data frame received for 5\nI1127 22:20:37.239419    2967 log.go:172] (0x40001be820) (5) Data frame handling\nI1127 22:20:37.239696    2967 log.go:172] (0x4000676420) Data frame received for 1\nI1127 22:20:37.239915    2967 log.go:172] (0x40001be780) (1) Data frame handling\nI1127 22:20:37.242491    2967 log.go:172] (0x40001be780) (1) Data frame sent\nI1127 22:20:37.243008    2967 log.go:172] (0x4000676420) (0x40008ce000) Stream removed, broadcasting: 3\nI1127 22:20:37.244058    2967 log.go:172] (0x4000676420) (0x40001be780) Stream removed, broadcasting: 1\nI1127 22:20:37.244508    2967 log.go:172] (0x4000676420) Go away received\nI1127 22:20:37.246686    2967 log.go:172] (0x4000676420) (0x40001be780) Stream removed, broadcasting: 1\nI1127 22:20:37.246951    2967 log.go:172] (0x4000676420) (0x40008ce000) Stream removed, broadcasting: 3\nI1127 22:20:37.247658    2967 log.go:172] (0x4000676420) (0x40001be820) Stream removed, broadcasting: 5\nI1127 22:20:37.247820    2967 log.go:172] (0x4000676420) (0x4000638280) Stream removed, broadcasting: 7\n"
Nov 27 22:20:37.309: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:20:39.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9801" for this suite.
Nov 27 22:20:47.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:20:47.522: INFO: namespace kubectl-9801 deletion completed in 8.187584129s

• [SLOW TEST:14.490 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:20:47.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1127 22:20:58.919553       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 27 22:20:58.920: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:20:58.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-79" for this suite.
Nov 27 22:21:06.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:21:07.120: INFO: namespace gc-79 deletion completed in 8.168787712s

• [SLOW TEST:19.597 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:21:07.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Nov 27 22:21:07.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-747'
Nov 27 22:21:08.923: INFO: stderr: ""
Nov 27 22:21:08.924: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Nov 27 22:21:08.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-747'
Nov 27 22:21:10.223: INFO: stderr: ""
Nov 27 22:21:10.223: INFO: stdout: "update-demo-nautilus-578zx update-demo-nautilus-jxxts "
Nov 27 22:21:10.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-578zx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-747'
Nov 27 22:21:11.478: INFO: stderr: ""
Nov 27 22:21:11.478: INFO: stdout: ""
Nov 27 22:21:11.478: INFO: update-demo-nautilus-578zx is created but not running
Nov 27 22:21:16.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-747'
Nov 27 22:21:17.816: INFO: stderr: ""
Nov 27 22:21:17.816: INFO: stdout: "update-demo-nautilus-578zx update-demo-nautilus-jxxts "
Nov 27 22:21:17.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-578zx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-747'
Nov 27 22:21:19.077: INFO: stderr: ""
Nov 27 22:21:19.078: INFO: stdout: "true"
Nov 27 22:21:19.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-578zx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-747'
Nov 27 22:21:20.383: INFO: stderr: ""
Nov 27 22:21:20.383: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Nov 27 22:21:20.384: INFO: validating pod update-demo-nautilus-578zx
Nov 27 22:21:20.391: INFO: got data: {
  "image": "nautilus.jpg"
}

Nov 27 22:21:20.391: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Nov 27 22:21:20.391: INFO: update-demo-nautilus-578zx is verified up and running
Nov 27 22:21:20.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jxxts -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-747'
Nov 27 22:21:21.682: INFO: stderr: ""
Nov 27 22:21:21.682: INFO: stdout: "true"
Nov 27 22:21:21.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jxxts -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-747'
Nov 27 22:21:22.982: INFO: stderr: ""
Nov 27 22:21:22.982: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Nov 27 22:21:22.982: INFO: validating pod update-demo-nautilus-jxxts
Nov 27 22:21:22.988: INFO: got data: {
  "image": "nautilus.jpg"
}

Nov 27 22:21:22.989: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Nov 27 22:21:22.989: INFO: update-demo-nautilus-jxxts is verified up and running
STEP: scaling down the replication controller
Nov 27 22:21:22.997: INFO: scanned /root for discovery docs: 
Nov 27 22:21:22.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-747'
Nov 27 22:21:24.367: INFO: stderr: ""
Nov 27 22:21:24.368: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Nov 27 22:21:24.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-747'
Nov 27 22:21:25.669: INFO: stderr: ""
Nov 27 22:21:25.670: INFO: stdout: "update-demo-nautilus-578zx update-demo-nautilus-jxxts "
STEP: Replicas for name=update-demo: expected=1 actual=2
Nov 27 22:21:30.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-747'
Nov 27 22:21:31.956: INFO: stderr: ""
Nov 27 22:21:31.956: INFO: stdout: "update-demo-nautilus-578zx update-demo-nautilus-jxxts "
STEP: Replicas for name=update-demo: expected=1 actual=2
Nov 27 22:21:36.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-747'
Nov 27 22:21:38.222: INFO: stderr: ""
Nov 27 22:21:38.222: INFO: stdout: "update-demo-nautilus-jxxts "
Nov 27 22:21:38.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jxxts -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-747'
Nov 27 22:21:39.486: INFO: stderr: ""
Nov 27 22:21:39.486: INFO: stdout: "true"
Nov 27 22:21:39.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jxxts -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-747'
Nov 27 22:21:40.757: INFO: stderr: ""
Nov 27 22:21:40.757: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Nov 27 22:21:40.757: INFO: validating pod update-demo-nautilus-jxxts
Nov 27 22:21:40.767: INFO: got data: {
  "image": "nautilus.jpg"
}

Nov 27 22:21:40.768: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Nov 27 22:21:40.768: INFO: update-demo-nautilus-jxxts is verified up and running
STEP: scaling up the replication controller
Nov 27 22:21:40.777: INFO: scanned /root for discovery docs: 
Nov 27 22:21:40.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-747'
Nov 27 22:21:43.233: INFO: stderr: ""
Nov 27 22:21:43.234: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Nov 27 22:21:43.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-747'
Nov 27 22:21:44.503: INFO: stderr: ""
Nov 27 22:21:44.503: INFO: stdout: "update-demo-nautilus-jxxts update-demo-nautilus-sp825 "
Nov 27 22:21:44.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jxxts -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-747'
Nov 27 22:21:45.804: INFO: stderr: ""
Nov 27 22:21:45.804: INFO: stdout: "true"
Nov 27 22:21:45.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jxxts -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-747'
Nov 27 22:21:47.055: INFO: stderr: ""
Nov 27 22:21:47.055: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Nov 27 22:21:47.056: INFO: validating pod update-demo-nautilus-jxxts
Nov 27 22:21:47.061: INFO: got data: {
  "image": "nautilus.jpg"
}

Nov 27 22:21:47.061: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Nov 27 22:21:47.061: INFO: update-demo-nautilus-jxxts is verified up and running
Nov 27 22:21:47.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sp825 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-747'
Nov 27 22:21:48.301: INFO: stderr: ""
Nov 27 22:21:48.301: INFO: stdout: "true"
Nov 27 22:21:48.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sp825 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-747'
Nov 27 22:21:49.589: INFO: stderr: ""
Nov 27 22:21:49.589: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Nov 27 22:21:49.589: INFO: validating pod update-demo-nautilus-sp825
Nov 27 22:21:49.595: INFO: got data: {
  "image": "nautilus.jpg"
}

Nov 27 22:21:49.595: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Nov 27 22:21:49.595: INFO: update-demo-nautilus-sp825 is verified up and running
STEP: using delete to clean up resources
Nov 27 22:21:49.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-747'
Nov 27 22:21:50.825: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 27 22:21:50.826: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Nov 27 22:21:50.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-747'
Nov 27 22:21:52.145: INFO: stderr: "No resources found.\n"
Nov 27 22:21:52.145: INFO: stdout: ""
Nov 27 22:21:52.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-747 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Nov 27 22:21:53.486: INFO: stderr: ""
Nov 27 22:21:53.486: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:21:53.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-747" for this suite.
Nov 27 22:21:59.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:21:59.688: INFO: namespace kubectl-747 deletion completed in 6.19331832s

• [SLOW TEST:52.562 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:21:59.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Nov 27 22:21:59.835: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Nov 27 22:21:59.850: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:21:59.897: INFO: Number of nodes with available pods: 0
Nov 27 22:21:59.897: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 22:22:00.907: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:00.912: INFO: Number of nodes with available pods: 0
Nov 27 22:22:00.912: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 22:22:02.228: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:02.292: INFO: Number of nodes with available pods: 0
Nov 27 22:22:02.293: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 22:22:02.927: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:02.935: INFO: Number of nodes with available pods: 0
Nov 27 22:22:02.935: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 22:22:03.982: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:03.988: INFO: Number of nodes with available pods: 0
Nov 27 22:22:03.988: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 22:22:04.931: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:05.025: INFO: Number of nodes with available pods: 2
Nov 27 22:22:05.025: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Nov 27 22:22:05.076: INFO: Wrong image for pod: daemon-set-5v92w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:05.076: INFO: Wrong image for pod: daemon-set-8skkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:05.093: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:06.100: INFO: Wrong image for pod: daemon-set-5v92w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:06.101: INFO: Wrong image for pod: daemon-set-8skkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:06.111: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:07.100: INFO: Wrong image for pod: daemon-set-5v92w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:07.100: INFO: Wrong image for pod: daemon-set-8skkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:07.106: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:08.100: INFO: Wrong image for pod: daemon-set-5v92w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:08.100: INFO: Wrong image for pod: daemon-set-8skkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:08.108: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:09.102: INFO: Wrong image for pod: daemon-set-5v92w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:09.102: INFO: Wrong image for pod: daemon-set-8skkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:09.102: INFO: Pod daemon-set-8skkg is not available
Nov 27 22:22:09.113: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:10.100: INFO: Wrong image for pod: daemon-set-5v92w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:10.100: INFO: Wrong image for pod: daemon-set-8skkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:10.100: INFO: Pod daemon-set-8skkg is not available
Nov 27 22:22:10.110: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:11.102: INFO: Wrong image for pod: daemon-set-5v92w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:11.102: INFO: Wrong image for pod: daemon-set-8skkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:11.102: INFO: Pod daemon-set-8skkg is not available
Nov 27 22:22:11.110: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:12.102: INFO: Wrong image for pod: daemon-set-5v92w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:12.102: INFO: Wrong image for pod: daemon-set-8skkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:12.102: INFO: Pod daemon-set-8skkg is not available
Nov 27 22:22:12.108: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:13.101: INFO: Wrong image for pod: daemon-set-5v92w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:13.101: INFO: Wrong image for pod: daemon-set-8skkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:13.101: INFO: Pod daemon-set-8skkg is not available
Nov 27 22:22:13.110: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:14.102: INFO: Wrong image for pod: daemon-set-5v92w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:14.102: INFO: Wrong image for pod: daemon-set-8skkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:14.102: INFO: Pod daemon-set-8skkg is not available
Nov 27 22:22:14.111: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:15.102: INFO: Wrong image for pod: daemon-set-5v92w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:15.102: INFO: Wrong image for pod: daemon-set-8skkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:15.102: INFO: Pod daemon-set-8skkg is not available
Nov 27 22:22:15.112: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:16.101: INFO: Pod daemon-set-4pww6 is not available
Nov 27 22:22:16.101: INFO: Wrong image for pod: daemon-set-5v92w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:16.110: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:17.115: INFO: Pod daemon-set-4pww6 is not available
Nov 27 22:22:17.116: INFO: Wrong image for pod: daemon-set-5v92w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:17.124: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:18.101: INFO: Pod daemon-set-4pww6 is not available
Nov 27 22:22:18.101: INFO: Wrong image for pod: daemon-set-5v92w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:18.112: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:19.101: INFO: Wrong image for pod: daemon-set-5v92w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:19.109: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:20.101: INFO: Wrong image for pod: daemon-set-5v92w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:20.101: INFO: Pod daemon-set-5v92w is not available
Nov 27 22:22:20.107: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:21.102: INFO: Wrong image for pod: daemon-set-5v92w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:21.102: INFO: Pod daemon-set-5v92w is not available
Nov 27 22:22:21.113: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:22.101: INFO: Wrong image for pod: daemon-set-5v92w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:22.101: INFO: Pod daemon-set-5v92w is not available
Nov 27 22:22:22.110: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:23.099: INFO: Wrong image for pod: daemon-set-5v92w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:23.099: INFO: Pod daemon-set-5v92w is not available
Nov 27 22:22:23.105: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:24.101: INFO: Wrong image for pod: daemon-set-5v92w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:24.101: INFO: Pod daemon-set-5v92w is not available
Nov 27 22:22:24.109: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:25.102: INFO: Wrong image for pod: daemon-set-5v92w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Nov 27 22:22:25.102: INFO: Pod daemon-set-5v92w is not available
Nov 27 22:22:25.112: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:26.102: INFO: Pod daemon-set-hvvm9 is not available
Nov 27 22:22:26.112: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Nov 27 22:22:26.123: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:26.129: INFO: Number of nodes with available pods: 1
Nov 27 22:22:26.129: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 22:22:27.140: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:27.146: INFO: Number of nodes with available pods: 1
Nov 27 22:22:27.146: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 22:22:28.141: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 27 22:22:28.148: INFO: Number of nodes with available pods: 2
Nov 27 22:22:28.148: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6247, will wait for the garbage collector to delete the pods
Nov 27 22:22:28.234: INFO: Deleting DaemonSet.extensions daemon-set took: 7.541678ms
Nov 27 22:22:28.535: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.705927ms
Nov 27 22:22:35.751: INFO: Number of nodes with available pods: 0
Nov 27 22:22:35.751: INFO: Number of running nodes: 0, number of available pods: 0
Nov 27 22:22:35.754: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6247/daemonsets","resourceVersion":"11928062"},"items":null}

Nov 27 22:22:35.757: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6247/pods","resourceVersion":"11928062"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:22:35.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6247" for this suite.
Nov 27 22:22:41.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:22:42.001: INFO: namespace daemonsets-6247 deletion completed in 6.2187704s

• [SLOW TEST:42.307 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
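The rollout logged above replaces `docker.io/library/nginx:1.14-alpine` with the redis test image one pod at a time, which is the RollingUpdate strategy at work. A minimal sketch of a DaemonSet configured this way (the `daemon-set` name matches the test; labels, namespace, and the busybox-style template are illustrative, not taken from the run):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set            # name used by the test; everything else here is illustrative
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate       # replace pods node by node when the template changes
    rollingUpdate:
      maxUnavailable: 1       # at most one node without a ready pod at a time,
                              # matching the single "is not available" pod in the log
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # the test later patches this
                                                     # to the redis test image
```

Changing `spec.template.spec.containers[0].image` triggers the per-node pod replacement seen in the polling lines; the taint messages just record that the control-plane node is skipped because the DaemonSet does not tolerate `node-role.kubernetes.io/master:NoSchedule`.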
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:22:42.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Nov 27 22:22:42.143: INFO: Waiting up to 5m0s for pod "pod-a0ecd190-6aef-4551-bf48-7a07a5af241d" in namespace "emptydir-7484" to be "success or failure"
Nov 27 22:22:42.160: INFO: Pod "pod-a0ecd190-6aef-4551-bf48-7a07a5af241d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.365091ms
Nov 27 22:22:44.167: INFO: Pod "pod-a0ecd190-6aef-4551-bf48-7a07a5af241d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023181231s
Nov 27 22:22:46.174: INFO: Pod "pod-a0ecd190-6aef-4551-bf48-7a07a5af241d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030295861s
STEP: Saw pod success
Nov 27 22:22:46.174: INFO: Pod "pod-a0ecd190-6aef-4551-bf48-7a07a5af241d" satisfied condition "success or failure"
Nov 27 22:22:46.180: INFO: Trying to get logs from node iruya-worker2 pod pod-a0ecd190-6aef-4551-bf48-7a07a5af241d container test-container: 
STEP: delete the pod
Nov 27 22:22:46.278: INFO: Waiting for pod pod-a0ecd190-6aef-4551-bf48-7a07a5af241d to disappear
Nov 27 22:22:46.298: INFO: Pod pod-a0ecd190-6aef-4551-bf48-7a07a5af241d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:22:46.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7484" for this suite.
Nov 27 22:22:52.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:22:52.478: INFO: namespace emptydir-7484 deletion completed in 6.171860374s

• [SLOW TEST:10.471 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
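The "(non-root,0666,default)" variant above runs as a non-root user, writes a file with mode 0666 into an emptyDir backed by the default (disk) medium, and verifies the permissions. A hedged sketch of such a pod; the image, UID, and command are assumptions for illustration (the suite uses its own mount-test image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo     # illustrative name
spec:
  securityContext:
    runAsUser: 1001            # "non-root": any non-zero UID
  containers:
  - name: test-container       # container name matches the log
    image: busybox             # illustrative stand-in for the e2e test image
    command: ["sh", "-c", "touch /mnt/f && chmod 0666 /mnt/f && ls -l /mnt"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir: {}               # "default medium": node disk, not medium: Memory
  restartPolicy: Never
```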
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:22:52.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Nov 27 22:22:57.148: INFO: Successfully updated pod "pod-update-activedeadlineseconds-a0ea8cfc-15f9-429f-8122-0873a4cfd2c4"
Nov 27 22:22:57.148: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-a0ea8cfc-15f9-429f-8122-0873a4cfd2c4" in namespace "pods-1699" to be "terminated due to deadline exceeded"
Nov 27 22:22:57.175: INFO: Pod "pod-update-activedeadlineseconds-a0ea8cfc-15f9-429f-8122-0873a4cfd2c4": Phase="Running", Reason="", readiness=true. Elapsed: 26.93051ms
Nov 27 22:22:59.182: INFO: Pod "pod-update-activedeadlineseconds-a0ea8cfc-15f9-429f-8122-0873a4cfd2c4": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.033573415s
Nov 27 22:22:59.182: INFO: Pod "pod-update-activedeadlineseconds-a0ea8cfc-15f9-429f-8122-0873a4cfd2c4" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:22:59.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1699" for this suite.
Nov 27 22:23:05.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:23:05.380: INFO: namespace pods-1699 deletion completed in 6.189301063s

• [SLOW TEST:12.896 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
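The sequence above (Running, then Failed with reason DeadlineExceeded about two seconds after the update) is what happens when `activeDeadlineSeconds` is lowered on a live pod: the kubelet kills it once the deadline is exceeded. A sketch of the shape of such a pod, with illustrative image and command:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-update-activedeadlineseconds-demo   # follows the test's naming pattern
spec:
  activeDeadlineSeconds: 30    # the test patches this down to a small value;
                               # the pod then fails with reason DeadlineExceeded
  containers:
  - name: main
    image: busybox             # illustrative
    command: ["sleep", "600"]
```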
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:23:05.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Nov 27 22:23:05.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8254'
Nov 27 22:23:09.787: INFO: stderr: ""
Nov 27 22:23:09.788: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Nov 27 22:23:10.796: INFO: Selector matched 1 pods for map[app:redis]
Nov 27 22:23:10.796: INFO: Found 0 / 1
Nov 27 22:23:11.796: INFO: Selector matched 1 pods for map[app:redis]
Nov 27 22:23:11.796: INFO: Found 0 / 1
Nov 27 22:23:12.795: INFO: Selector matched 1 pods for map[app:redis]
Nov 27 22:23:12.795: INFO: Found 1 / 1
Nov 27 22:23:12.796: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Nov 27 22:23:12.801: INFO: Selector matched 1 pods for map[app:redis]
Nov 27 22:23:12.801: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Nov 27 22:23:12.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-2xm2t --namespace=kubectl-8254 -p {"metadata":{"annotations":{"x":"y"}}}'
Nov 27 22:23:14.090: INFO: stderr: ""
Nov 27 22:23:14.090: INFO: stdout: "pod/redis-master-2xm2t patched\n"
STEP: checking annotations
Nov 27 22:23:14.103: INFO: Selector matched 1 pods for map[app:redis]
Nov 27 22:23:14.104: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:23:14.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8254" for this suite.
Nov 27 22:23:36.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:23:36.303: INFO: namespace kubectl-8254 deletion completed in 22.191783229s

• [SLOW TEST:30.920 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:23:36.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-31edd2f4-a226-49cb-a326-4af04b03d6f1
STEP: Creating a pod to test consume secrets
Nov 27 22:23:36.392: INFO: Waiting up to 5m0s for pod "pod-secrets-f91f3003-5b0d-4340-b76c-9d24ef4c8ea5" in namespace "secrets-1421" to be "success or failure"
Nov 27 22:23:36.439: INFO: Pod "pod-secrets-f91f3003-5b0d-4340-b76c-9d24ef4c8ea5": Phase="Pending", Reason="", readiness=false. Elapsed: 46.742141ms
Nov 27 22:23:38.444: INFO: Pod "pod-secrets-f91f3003-5b0d-4340-b76c-9d24ef4c8ea5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052504224s
Nov 27 22:23:40.467: INFO: Pod "pod-secrets-f91f3003-5b0d-4340-b76c-9d24ef4c8ea5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074948321s
STEP: Saw pod success
Nov 27 22:23:40.467: INFO: Pod "pod-secrets-f91f3003-5b0d-4340-b76c-9d24ef4c8ea5" satisfied condition "success or failure"
Nov 27 22:23:40.471: INFO: Trying to get logs from node iruya-worker pod pod-secrets-f91f3003-5b0d-4340-b76c-9d24ef4c8ea5 container secret-volume-test: 
STEP: delete the pod
Nov 27 22:23:40.523: INFO: Waiting for pod pod-secrets-f91f3003-5b0d-4340-b76c-9d24ef4c8ea5 to disappear
Nov 27 22:23:40.542: INFO: Pod pod-secrets-f91f3003-5b0d-4340-b76c-9d24ef4c8ea5 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:23:40.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1421" for this suite.
Nov 27 22:23:46.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:23:46.795: INFO: namespace secrets-1421 deletion completed in 6.244730205s

• [SLOW TEST:10.490 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
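"With mappings and Item Mode set" means the secret volume uses per-item `key`/`path` mappings and an explicit file `mode`, which the pod then verifies. A hedged sketch (secret name, key, path, and image are illustrative; the run used generated names like the one in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo       # illustrative
spec:
  containers:
  - name: secret-volume-test   # container name matches the log
    image: busybox             # illustrative stand-in for the e2e test image
    command: ["sh", "-c", "stat -c '%a' /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret    # illustrative; the test creates a generated name
      items:
      - key: data-1            # "mapping": remap the secret key to a new path
        path: new-path-data-1
        mode: 0400             # "Item Mode set": per-file permissions
  restartPolicy: Never
```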
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:23:46.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-468/secret-test-dc23f349-39f4-4eee-8508-b7cd0e5b7c00
STEP: Creating a pod to test consume secrets
Nov 27 22:23:46.931: INFO: Waiting up to 5m0s for pod "pod-configmaps-9266c19e-aad0-4159-a857-89ceb87fa041" in namespace "secrets-468" to be "success or failure"
Nov 27 22:23:46.938: INFO: Pod "pod-configmaps-9266c19e-aad0-4159-a857-89ceb87fa041": Phase="Pending", Reason="", readiness=false. Elapsed: 6.644781ms
Nov 27 22:23:48.944: INFO: Pod "pod-configmaps-9266c19e-aad0-4159-a857-89ceb87fa041": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012901391s
Nov 27 22:23:50.951: INFO: Pod "pod-configmaps-9266c19e-aad0-4159-a857-89ceb87fa041": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019644135s
STEP: Saw pod success
Nov 27 22:23:50.951: INFO: Pod "pod-configmaps-9266c19e-aad0-4159-a857-89ceb87fa041" satisfied condition "success or failure"
Nov 27 22:23:50.956: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-9266c19e-aad0-4159-a857-89ceb87fa041 container env-test: 
STEP: delete the pod
Nov 27 22:23:51.030: INFO: Waiting for pod pod-configmaps-9266c19e-aad0-4159-a857-89ceb87fa041 to disappear
Nov 27 22:23:51.105: INFO: Pod pod-configmaps-9266c19e-aad0-4159-a857-89ceb87fa041 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:23:51.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-468" for this suite.
Nov 27 22:23:57.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:23:57.328: INFO: namespace secrets-468 deletion completed in 6.214852434s

• [SLOW TEST:10.529 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
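Consuming a secret "via the environment" means injecting a secret key as an environment variable with `secretKeyRef`, as opposed to mounting it as a volume. A minimal sketch with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-test-demo          # illustrative
spec:
  containers:
  - name: env-test             # container name matches the log
    image: busybox             # illustrative
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: my-secret      # illustrative; the run used a generated name
          key: data-1
  restartPolicy: Never
```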
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:23:57.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Nov 27 22:23:57.407: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a53664f5-4824-4517-8595-2ae7fb9b664f" in namespace "projected-5159" to be "success or failure"
Nov 27 22:23:57.473: INFO: Pod "downwardapi-volume-a53664f5-4824-4517-8595-2ae7fb9b664f": Phase="Pending", Reason="", readiness=false. Elapsed: 66.527328ms
Nov 27 22:23:59.545: INFO: Pod "downwardapi-volume-a53664f5-4824-4517-8595-2ae7fb9b664f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138076638s
Nov 27 22:24:01.552: INFO: Pod "downwardapi-volume-a53664f5-4824-4517-8595-2ae7fb9b664f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.14526364s
STEP: Saw pod success
Nov 27 22:24:01.553: INFO: Pod "downwardapi-volume-a53664f5-4824-4517-8595-2ae7fb9b664f" satisfied condition "success or failure"
Nov 27 22:24:01.558: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-a53664f5-4824-4517-8595-2ae7fb9b664f container client-container: 
STEP: delete the pod
Nov 27 22:24:01.583: INFO: Waiting for pod downwardapi-volume-a53664f5-4824-4517-8595-2ae7fb9b664f to disappear
Nov 27 22:24:01.586: INFO: Pod downwardapi-volume-a53664f5-4824-4517-8595-2ae7fb9b664f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:24:01.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5159" for this suite.
Nov 27 22:24:07.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:24:07.862: INFO: namespace projected-5159 deletion completed in 6.268216644s

• [SLOW TEST:10.533 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
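This spec checks that when a container sets no CPU limit, a downward API `resourceFieldRef` for `limits.cpu` reports the node's allocatable CPU instead. A sketch of a pod using a projected downwardAPI volume for this; image, paths, and names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # illustrative
spec:
  containers:
  - name: client-container     # container name matches the log
    image: busybox             # illustrative
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits.cpu here: the reported value falls back
    # to the node's allocatable CPU
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
  restartPolicy: Never
```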
------------------------------
SSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:24:07.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Nov 27 22:24:12.547: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4854 pod-service-account-6f697226-8059-46bd-82b2-f9191f483cfa -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Nov 27 22:24:14.047: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4854 pod-service-account-6f697226-8059-46bd-82b2-f9191f483cfa -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Nov 27 22:24:15.524: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4854 pod-service-account-6f697226-8059-46bd-82b2-f9191f483cfa -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:24:17.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4854" for this suite.
Nov 27 22:24:23.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:24:23.262: INFO: namespace svcaccounts-4854 deletion completed in 6.182110011s

• [SLOW TEST:15.399 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:24:23.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Nov 27 22:24:23.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Nov 27 22:24:24.616: INFO: stderr: ""
Nov 27 22:24:24.616: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37711\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37711/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:24:24.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8607" for this suite.
Nov 27 22:24:30.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:24:30.828: INFO: namespace kubectl-8607 deletion completed in 6.202504923s

• [SLOW TEST:7.564 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:24:30.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Nov 27 22:24:30.919: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Nov 27 22:24:32.018: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:24:32.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1343" for this suite.
Nov 27 22:24:38.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:24:38.357: INFO: namespace replication-controller-1343 deletion completed in 6.275061648s

• [SLOW TEST:7.527 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:24:38.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b6395dd5-7290-4790-90bc-f3f21b81837b
STEP: Creating a pod to test consume secrets
Nov 27 22:24:38.870: INFO: Waiting up to 5m0s for pod "pod-secrets-103ed612-0844-4ca6-ae1d-39c6e6c0a475" in namespace "secrets-1613" to be "success or failure"
Nov 27 22:24:38.907: INFO: Pod "pod-secrets-103ed612-0844-4ca6-ae1d-39c6e6c0a475": Phase="Pending", Reason="", readiness=false. Elapsed: 36.638373ms
Nov 27 22:24:41.006: INFO: Pod "pod-secrets-103ed612-0844-4ca6-ae1d-39c6e6c0a475": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13558848s
Nov 27 22:24:43.013: INFO: Pod "pod-secrets-103ed612-0844-4ca6-ae1d-39c6e6c0a475": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.1421084s
STEP: Saw pod success
Nov 27 22:24:43.013: INFO: Pod "pod-secrets-103ed612-0844-4ca6-ae1d-39c6e6c0a475" satisfied condition "success or failure"
Nov 27 22:24:43.017: INFO: Trying to get logs from node iruya-worker pod pod-secrets-103ed612-0844-4ca6-ae1d-39c6e6c0a475 container secret-volume-test: 
STEP: delete the pod
Nov 27 22:24:43.037: INFO: Waiting for pod pod-secrets-103ed612-0844-4ca6-ae1d-39c6e6c0a475 to disappear
Nov 27 22:24:43.089: INFO: Pod pod-secrets-103ed612-0844-4ca6-ae1d-39c6e6c0a475 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:24:43.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1613" for this suite.
Nov 27 22:24:49.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:24:49.282: INFO: namespace secrets-1613 deletion completed in 6.181038475s
STEP: Destroying namespace "secret-namespace-9031" for this suite.
Nov 27 22:24:55.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:24:55.530: INFO: namespace secret-namespace-9031 deletion completed in 6.248271233s

• [SLOW TEST:17.172 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:24:55.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Nov 27 22:24:55.604: INFO: Waiting up to 5m0s for pod "pod-5930a480-16c5-4046-a7d1-c69b8c62a68b" in namespace "emptydir-8801" to be "success or failure"
Nov 27 22:24:55.666: INFO: Pod "pod-5930a480-16c5-4046-a7d1-c69b8c62a68b": Phase="Pending", Reason="", readiness=false. Elapsed: 62.394425ms
Nov 27 22:24:57.673: INFO: Pod "pod-5930a480-16c5-4046-a7d1-c69b8c62a68b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068908355s
Nov 27 22:24:59.679: INFO: Pod "pod-5930a480-16c5-4046-a7d1-c69b8c62a68b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074929044s
STEP: Saw pod success
Nov 27 22:24:59.679: INFO: Pod "pod-5930a480-16c5-4046-a7d1-c69b8c62a68b" satisfied condition "success or failure"
Nov 27 22:24:59.686: INFO: Trying to get logs from node iruya-worker2 pod pod-5930a480-16c5-4046-a7d1-c69b8c62a68b container test-container: 
STEP: delete the pod
Nov 27 22:24:59.732: INFO: Waiting for pod pod-5930a480-16c5-4046-a7d1-c69b8c62a68b to disappear
Nov 27 22:24:59.744: INFO: Pod pod-5930a480-16c5-4046-a7d1-c69b8c62a68b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:24:59.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8801" for this suite.
Nov 27 22:25:05.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:25:05.941: INFO: namespace emptydir-8801 deletion completed in 6.189417379s

• [SLOW TEST:10.410 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:25:05.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Nov 27 22:25:06.132: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-2024,SelfLink:/api/v1/namespaces/watch-2024/configmaps/e2e-watch-test-resource-version,UID:5c4b26da-e2bf-4d01-bbe4-8741d2f91e52,ResourceVersion:11928708,Generation:0,CreationTimestamp:2020-11-27 22:25:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Nov 27 22:25:06.134: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-2024,SelfLink:/api/v1/namespaces/watch-2024/configmaps/e2e-watch-test-resource-version,UID:5c4b26da-e2bf-4d01-bbe4-8741d2f91e52,ResourceVersion:11928709,Generation:0,CreationTimestamp:2020-11-27 22:25:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:25:06.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2024" for this suite.
Nov 27 22:25:12.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:25:12.324: INFO: namespace watch-2024 deletion completed in 6.180476883s

• [SLOW TEST:6.382 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:25:12.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-5bb52f90-ffc7-4255-ad11-7ca9475bb11f
STEP: Creating a pod to test consume configMaps
Nov 27 22:25:12.436: INFO: Waiting up to 5m0s for pod "pod-configmaps-a903ed06-edbb-4d73-9125-85d7c105c2ce" in namespace "configmap-1395" to be "success or failure"
Nov 27 22:25:12.441: INFO: Pod "pod-configmaps-a903ed06-edbb-4d73-9125-85d7c105c2ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.596148ms
Nov 27 22:25:14.448: INFO: Pod "pod-configmaps-a903ed06-edbb-4d73-9125-85d7c105c2ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01169314s
Nov 27 22:25:16.454: INFO: Pod "pod-configmaps-a903ed06-edbb-4d73-9125-85d7c105c2ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017703644s
STEP: Saw pod success
Nov 27 22:25:16.455: INFO: Pod "pod-configmaps-a903ed06-edbb-4d73-9125-85d7c105c2ce" satisfied condition "success or failure"
Nov 27 22:25:16.459: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-a903ed06-edbb-4d73-9125-85d7c105c2ce container configmap-volume-test: 
STEP: delete the pod
Nov 27 22:25:16.701: INFO: Waiting for pod pod-configmaps-a903ed06-edbb-4d73-9125-85d7c105c2ce to disappear
Nov 27 22:25:16.706: INFO: Pod pod-configmaps-a903ed06-edbb-4d73-9125-85d7c105c2ce no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:25:16.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1395" for this suite.
Nov 27 22:25:22.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:25:22.886: INFO: namespace configmap-1395 deletion completed in 6.173112487s

• [SLOW TEST:10.554 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:25:22.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-3ef1609f-c6ee-46b0-83b0-ca964abf7825
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-3ef1609f-c6ee-46b0-83b0-ca964abf7825
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:26:41.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8293" for this suite.
Nov 27 22:27:03.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:27:03.745: INFO: namespace projected-8293 deletion completed in 22.20972857s

• [SLOW TEST:100.857 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:27:03.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5718
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 27 22:27:03.826: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Nov 27 22:27:27.985: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.42:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5718 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 22:27:27.985: INFO: >>> kubeConfig: /root/.kube/config
I1127 22:27:28.046981       7 log.go:172] (0x4000adc8f0) (0x400259eb40) Create stream
I1127 22:27:28.047157       7 log.go:172] (0x4000adc8f0) (0x400259eb40) Stream added, broadcasting: 1
I1127 22:27:28.051215       7 log.go:172] (0x4000adc8f0) Reply frame received for 1
I1127 22:27:28.051419       7 log.go:172] (0x4000adc8f0) (0x40004481e0) Create stream
I1127 22:27:28.051522       7 log.go:172] (0x4000adc8f0) (0x40004481e0) Stream added, broadcasting: 3
I1127 22:27:28.053911       7 log.go:172] (0x4000adc8f0) Reply frame received for 3
I1127 22:27:28.054202       7 log.go:172] (0x4000adc8f0) (0x4000448320) Create stream
I1127 22:27:28.054321       7 log.go:172] (0x4000adc8f0) (0x4000448320) Stream added, broadcasting: 5
I1127 22:27:28.056238       7 log.go:172] (0x4000adc8f0) Reply frame received for 5
I1127 22:27:28.166416       7 log.go:172] (0x4000adc8f0) Data frame received for 3
I1127 22:27:28.166610       7 log.go:172] (0x40004481e0) (3) Data frame handling
I1127 22:27:28.166747       7 log.go:172] (0x4000adc8f0) Data frame received for 5
I1127 22:27:28.166938       7 log.go:172] (0x4000448320) (5) Data frame handling
I1127 22:27:28.167060       7 log.go:172] (0x40004481e0) (3) Data frame sent
I1127 22:27:28.167209       7 log.go:172] (0x4000adc8f0) Data frame received for 3
I1127 22:27:28.167289       7 log.go:172] (0x40004481e0) (3) Data frame handling
I1127 22:27:28.168213       7 log.go:172] (0x4000adc8f0) Data frame received for 1
I1127 22:27:28.168349       7 log.go:172] (0x400259eb40) (1) Data frame handling
I1127 22:27:28.168497       7 log.go:172] (0x400259eb40) (1) Data frame sent
I1127 22:27:28.168632       7 log.go:172] (0x4000adc8f0) (0x400259eb40) Stream removed, broadcasting: 1
I1127 22:27:28.168792       7 log.go:172] (0x4000adc8f0) Go away received
I1127 22:27:28.169339       7 log.go:172] (0x4000adc8f0) (0x400259eb40) Stream removed, broadcasting: 1
I1127 22:27:28.169519       7 log.go:172] (0x4000adc8f0) (0x40004481e0) Stream removed, broadcasting: 3
I1127 22:27:28.169637       7 log.go:172] (0x4000adc8f0) (0x4000448320) Stream removed, broadcasting: 5
Nov 27 22:27:28.169: INFO: Found all expected endpoints: [netserver-0]
Nov 27 22:27:28.175: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.113:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5718 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 27 22:27:28.175: INFO: >>> kubeConfig: /root/.kube/config
I1127 22:27:28.237876       7 log.go:172] (0x400068dad0) (0x4002834500) Create stream
I1127 22:27:28.238061       7 log.go:172] (0x400068dad0) (0x4002834500) Stream added, broadcasting: 1
I1127 22:27:28.242397       7 log.go:172] (0x400068dad0) Reply frame received for 1
I1127 22:27:28.242603       7 log.go:172] (0x400068dad0) (0x4001899540) Create stream
I1127 22:27:28.242678       7 log.go:172] (0x400068dad0) (0x4001899540) Stream added, broadcasting: 3
I1127 22:27:28.244096       7 log.go:172] (0x400068dad0) Reply frame received for 3
I1127 22:27:28.244235       7 log.go:172] (0x400068dad0) (0x400259ebe0) Create stream
I1127 22:27:28.244307       7 log.go:172] (0x400068dad0) (0x400259ebe0) Stream added, broadcasting: 5
I1127 22:27:28.245825       7 log.go:172] (0x400068dad0) Reply frame received for 5
I1127 22:27:28.311062       7 log.go:172] (0x400068dad0) Data frame received for 3
I1127 22:27:28.311298       7 log.go:172] (0x4001899540) (3) Data frame handling
I1127 22:27:28.311431       7 log.go:172] (0x4001899540) (3) Data frame sent
I1127 22:27:28.311574       7 log.go:172] (0x400068dad0) Data frame received for 3
I1127 22:27:28.311694       7 log.go:172] (0x4001899540) (3) Data frame handling
I1127 22:27:28.311892       7 log.go:172] (0x400068dad0) Data frame received for 5
I1127 22:27:28.312064       7 log.go:172] (0x400259ebe0) (5) Data frame handling
I1127 22:27:28.312604       7 log.go:172] (0x400068dad0) Data frame received for 1
I1127 22:27:28.312735       7 log.go:172] (0x4002834500) (1) Data frame handling
I1127 22:27:28.313001       7 log.go:172] (0x4002834500) (1) Data frame sent
I1127 22:27:28.313144       7 log.go:172] (0x400068dad0) (0x4002834500) Stream removed, broadcasting: 1
I1127 22:27:28.313296       7 log.go:172] (0x400068dad0) Go away received
I1127 22:27:28.313801       7 log.go:172] (0x400068dad0) (0x4002834500) Stream removed, broadcasting: 1
I1127 22:27:28.313985       7 log.go:172] (0x400068dad0) (0x4001899540) Stream removed, broadcasting: 3
I1127 22:27:28.314126       7 log.go:172] (0x400068dad0) (0x400259ebe0) Stream removed, broadcasting: 5
Nov 27 22:27:28.314: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:27:28.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5718" for this suite.
Nov 27 22:27:52.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:27:52.501: INFO: namespace pod-network-test-5718 deletion completed in 24.178322585s

• [SLOW TEST:48.755 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:27:52.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with the 'name' label 'pod-adoption-release' is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Nov 27 22:27:57.664: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:27:57.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3349" for this suite.
Nov 27 22:28:19.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:28:19.926: INFO: namespace replicaset-3349 deletion completed in 22.215871317s

• [SLOW TEST:27.423 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
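The manifests behind the ReplicaSet adoption/release steps are not shown in the log. A minimal sketch consistent with those steps (image and field values illustrative, not taken from the run) might look like:

```yaml
# Illustrative only: an orphan pod whose 'name' label matches the
# ReplicaSet selector, so the controller adopts it on creation.
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release
spec:
  containers:
  - name: app
    image: nginx                       # illustrative image
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release       # matches the pod above -> adoption
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: app
        image: nginx
```

Changing the pod's 'name' label so it no longer matches the selector causes the ReplicaSet to release it (its ownerReferences entry is removed) and create a replacement, which is the "Then the pod is released" step above.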
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:28:19.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Nov 27 22:28:20.118: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"65fb50d4-4ca5-44eb-82b6-849a94ab354f", Controller:(*bool)(0x40030661ba), BlockOwnerDeletion:(*bool)(0x40030661bb)}}
Nov 27 22:28:20.175: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"364f9912-477e-4e98-bf5f-1d5510cc02fb", Controller:(*bool)(0x4001a3e532), BlockOwnerDeletion:(*bool)(0x4001a3e533)}}
Nov 27 22:28:20.183: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"2159375b-fc7c-4ff2-a61e-1066095e7613", Controller:(*bool)(0x400306683a), BlockOwnerDeletion:(*bool)(0x400306683b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:28:25.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7392" for this suite.
Nov 27 22:28:31.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:28:31.496: INFO: namespace gc-7392 deletion completed in 6.223476481s

• [SLOW TEST:11.568 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
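The three ownerReferences dumps above form a cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2). In manifest form, each pod carries an entry like the following sketch (UID placeholder; the real value must match the live owner object, as in the logged dumps):

```yaml
# Illustrative: pod1's metadata, declaring pod3 as its owner.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: <uid of pod3>          # must match the owner's actual UID
    controller: true
    blockOwnerDeletion: true
```

The test then asserts that the garbage collector deletes all three pods even though blockOwnerDeletion forms a circle.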
SSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:28:31.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Nov 27 22:28:31.597: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/: 
alternatives.log
containers/

[identical kubelet log-directory listings repeated for the remaining proxy requests; the per-request INFO lines are lost, and the log is truncated here, resuming mid-way through the next test, [sig-storage] EmptyDir volumes]
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Nov 27 22:28:44.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-6d3c2e8c-44d1-4b16-b593-20f0e691fee4 -c busybox-main-container --namespace=emptydir-7272 -- cat /usr/share/volumeshare/shareddata.txt'
Nov 27 22:28:45.530: INFO: stderr: "I1127 22:28:45.401994    3656 log.go:172] (0x40006346e0) (0x4000916a00) Create stream\nI1127 22:28:45.404366    3656 log.go:172] (0x40006346e0) (0x4000916a00) Stream added, broadcasting: 1\nI1127 22:28:45.423157    3656 log.go:172] (0x40006346e0) Reply frame received for 1\nI1127 22:28:45.424178    3656 log.go:172] (0x40006346e0) (0x4000932000) Create stream\nI1127 22:28:45.424262    3656 log.go:172] (0x40006346e0) (0x4000932000) Stream added, broadcasting: 3\nI1127 22:28:45.425731    3656 log.go:172] (0x40006346e0) Reply frame received for 3\nI1127 22:28:45.425946    3656 log.go:172] (0x40006346e0) (0x40009320a0) Create stream\nI1127 22:28:45.425998    3656 log.go:172] (0x40006346e0) (0x40009320a0) Stream added, broadcasting: 5\nI1127 22:28:45.427169    3656 log.go:172] (0x40006346e0) Reply frame received for 5\nI1127 22:28:45.509145    3656 log.go:172] (0x40006346e0) Data frame received for 3\nI1127 22:28:45.509387    3656 log.go:172] (0x40006346e0) Data frame received for 5\nI1127 22:28:45.509530    3656 log.go:172] (0x4000932000) (3) Data frame handling\nI1127 22:28:45.509760    3656 log.go:172] (0x40006346e0) Data frame received for 1\nI1127 22:28:45.509935    3656 log.go:172] (0x4000916a00) (1) Data frame handling\nI1127 22:28:45.510147    3656 log.go:172] (0x40009320a0) (5) Data frame handling\nI1127 22:28:45.510798    3656 log.go:172] (0x4000916a00) (1) Data frame sent\nI1127 22:28:45.511237    3656 log.go:172] (0x4000932000) (3) Data frame sent\nI1127 22:28:45.512642    3656 log.go:172] (0x40006346e0) Data frame received for 3\nI1127 22:28:45.512826    3656 log.go:172] (0x4000932000) (3) Data frame handling\nI1127 22:28:45.515359    3656 log.go:172] (0x40006346e0) (0x4000916a00) Stream removed, broadcasting: 1\nI1127 22:28:45.518622    3656 log.go:172] (0x40006346e0) Go away received\nI1127 22:28:45.520667    3656 log.go:172] (0x40006346e0) (0x4000916a00) Stream removed, broadcasting: 1\nI1127 22:28:45.521071    3656 
log.go:172] (0x40006346e0) (0x4000932000) Stream removed, broadcasting: 3\nI1127 22:28:45.521465    3656 log.go:172] (0x40006346e0) (0x40009320a0) Stream removed, broadcasting: 5\n"
Nov 27 22:28:45.531: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:28:45.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7272" for this suite.
Nov 27 22:28:51.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:28:51.773: INFO: namespace emptydir-7272 deletion completed in 6.216759174s

• [SLOW TEST:13.885 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
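The kubectl exec above reads a file that one container wrote and another serves from the same emptyDir volume. A minimal sketch of such a pod (images and commands illustrative; the test's actual pod also includes an nginx container not shown in full here):

```yaml
# Illustrative: two containers sharing one emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                       # node-local scratch space, pod lifetime
  containers:
  - name: busybox-main-container
    image: busybox                     # illustrative image
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-sub-container
    image: busybox
    command: ['sh', '-c',
      'echo "Hello from the busy-box sub-container" > /usr/share/volumeshare/shareddata.txt && sleep 3600']
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
```

Because both containers mount the same volume, `cat` in the main container sees the text the sub-container wrote, matching the stdout logged above.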
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:28:51.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Nov 27 22:28:55.952: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Nov 27 22:29:02.153: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:29:02.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7128" for this suite.
Nov 27 22:29:08.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:29:08.391: INFO: namespace pods-7128 deletion completed in 6.218803506s

• [SLOW TEST:16.615 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
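The "deleting the pod gracefully" step depends on the pod's grace period: the kubelet sends SIGTERM, waits out the period, then SIGKILLs. A sketch of the relevant field (value illustrative):

```yaml
# Illustrative: how long the kubelet waits between SIGTERM and SIGKILL
# when the pod is deleted; this is the termination notice the test
# verifies the kubelet observed.
spec:
  terminationGracePeriodSeconds: 30
```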
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:29:08.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-f9dbb792-f135-4df2-938e-29998d39f54b
STEP: Creating a pod to test consume secrets
Nov 27 22:29:08.539: INFO: Waiting up to 5m0s for pod "pod-secrets-129405e3-11b3-4d61-a2ed-a8f44ca9dd73" in namespace "secrets-9737" to be "success or failure"
Nov 27 22:29:08.555: INFO: Pod "pod-secrets-129405e3-11b3-4d61-a2ed-a8f44ca9dd73": Phase="Pending", Reason="", readiness=false. Elapsed: 16.473097ms
Nov 27 22:29:10.562: INFO: Pod "pod-secrets-129405e3-11b3-4d61-a2ed-a8f44ca9dd73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02299465s
Nov 27 22:29:12.569: INFO: Pod "pod-secrets-129405e3-11b3-4d61-a2ed-a8f44ca9dd73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030112427s
STEP: Saw pod success
Nov 27 22:29:12.569: INFO: Pod "pod-secrets-129405e3-11b3-4d61-a2ed-a8f44ca9dd73" satisfied condition "success or failure"
Nov 27 22:29:12.574: INFO: Trying to get logs from node iruya-worker pod pod-secrets-129405e3-11b3-4d61-a2ed-a8f44ca9dd73 container secret-volume-test: 
STEP: delete the pod
Nov 27 22:29:12.600: INFO: Waiting for pod pod-secrets-129405e3-11b3-4d61-a2ed-a8f44ca9dd73 to disappear
Nov 27 22:29:12.604: INFO: Pod pod-secrets-129405e3-11b3-4d61-a2ed-a8f44ca9dd73 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:29:12.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9737" for this suite.
Nov 27 22:29:18.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:29:18.775: INFO: namespace secrets-9737 deletion completed in 6.16137136s

• [SLOW TEST:10.383 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
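"Volume with mappings" refers to the `items` list on a secret volume, which remaps a secret key to a chosen file path. A sketch of the pod this test likely creates (names, key, and paths illustrative; the run uses generated names):

```yaml
# Illustrative: mounting a secret with an explicit key-to-path mapping.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example            # illustrative name
spec:
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map      # illustrative; test uses a generated name
      items:
      - key: data-1                    # key inside the Secret
        path: new-path-data-1          # file path under the mount point
  containers:
  - name: secret-volume-test
    image: busybox                     # illustrative image
    command: ['cat', '/etc/secret-volume/new-path-data-1']
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
```

The container exits 0 after printing the mapped file, which is why the pod reaches Succeeded and satisfies the "success or failure" condition above.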
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:29:18.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Nov 27 22:29:18.871: INFO: Waiting up to 5m0s for pod "downwardapi-volume-289e4168-fef8-4dbe-b922-40a869993526" in namespace "downward-api-9987" to be "success or failure"
Nov 27 22:29:18.880: INFO: Pod "downwardapi-volume-289e4168-fef8-4dbe-b922-40a869993526": Phase="Pending", Reason="", readiness=false. Elapsed: 8.654428ms
Nov 27 22:29:20.887: INFO: Pod "downwardapi-volume-289e4168-fef8-4dbe-b922-40a869993526": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015543228s
Nov 27 22:29:22.895: INFO: Pod "downwardapi-volume-289e4168-fef8-4dbe-b922-40a869993526": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023373098s
STEP: Saw pod success
Nov 27 22:29:22.895: INFO: Pod "downwardapi-volume-289e4168-fef8-4dbe-b922-40a869993526" satisfied condition "success or failure"
Nov 27 22:29:22.900: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-289e4168-fef8-4dbe-b922-40a869993526 container client-container: 
STEP: delete the pod
Nov 27 22:29:22.924: INFO: Waiting for pod downwardapi-volume-289e4168-fef8-4dbe-b922-40a869993526 to disappear
Nov 27 22:29:22.945: INFO: Pod downwardapi-volume-289e4168-fef8-4dbe-b922-40a869993526 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:29:22.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9987" for this suite.
Nov 27 22:29:29.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:29:29.398: INFO: namespace downward-api-9987 deletion completed in 6.444467339s

• [SLOW TEST:10.622 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
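The downward API volume here exposes `limits.cpu` as a file; when the container sets no CPU limit, the kubelet substitutes the node's allocatable CPU, which is what the test asserts. A sketch of the volume definition (paths illustrative):

```yaml
# Illustrative: downwardAPI volume exposing the container's CPU limit.
# With no limits.cpu on the container, the file contains the node's
# allocatable CPU instead.
volumes:
- name: podinfo
  downwardAPI:
    items:
    - path: cpu_limit
      resourceFieldRef:
        containerName: client-container
        resource: limits.cpu
```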
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:29:29.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-784c6472-de3f-42a2-8a1c-617c640921f2
STEP: Creating a pod to test consume secrets
Nov 27 22:29:29.512: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-744484a3-4194-4cbe-a58f-e8b93e70e78a" in namespace "projected-1655" to be "success or failure"
Nov 27 22:29:29.572: INFO: Pod "pod-projected-secrets-744484a3-4194-4cbe-a58f-e8b93e70e78a": Phase="Pending", Reason="", readiness=false. Elapsed: 59.551853ms
Nov 27 22:29:31.580: INFO: Pod "pod-projected-secrets-744484a3-4194-4cbe-a58f-e8b93e70e78a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067700115s
Nov 27 22:29:33.714: INFO: Pod "pod-projected-secrets-744484a3-4194-4cbe-a58f-e8b93e70e78a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.201976819s
STEP: Saw pod success
Nov 27 22:29:33.714: INFO: Pod "pod-projected-secrets-744484a3-4194-4cbe-a58f-e8b93e70e78a" satisfied condition "success or failure"
Nov 27 22:29:33.718: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-744484a3-4194-4cbe-a58f-e8b93e70e78a container projected-secret-volume-test: 
STEP: delete the pod
Nov 27 22:29:33.806: INFO: Waiting for pod pod-projected-secrets-744484a3-4194-4cbe-a58f-e8b93e70e78a to disappear
Nov 27 22:29:33.826: INFO: Pod pod-projected-secrets-744484a3-4194-4cbe-a58f-e8b93e70e78a no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:29:33.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1655" for this suite.
Nov 27 22:29:39.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:29:40.071: INFO: namespace projected-1655 deletion completed in 6.238711063s

• [SLOW TEST:10.671 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
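A projected volume wraps one or more sources (secrets, configMaps, downward API) under a single mount, and the "Item Mode" in this test's name is the per-item file mode. A sketch (names illustrative; the run uses generated names):

```yaml
# Illustrative: projected volume sourcing a secret, with a per-item file mode.
volumes:
- name: projected-secret-volume
  projected:
    sources:
    - secret:
        name: projected-secret-test-map   # illustrative generated name
        items:
        - key: data-1
          path: new-path-data-1
          mode: 256                       # 0400 octal; the "Item Mode" verified
```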
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:29:40.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Nov 27 22:29:40.148: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:29:40.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4486" for this suite.
Nov 27 22:29:46.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:29:46.959: INFO: namespace custom-resource-definition-4486 deletion completed in 6.194879221s

• [SLOW TEST:6.886 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
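On a v1.15 cluster, CustomResourceDefinitions are created via the apiextensions.k8s.io/v1beta1 API. A minimal sketch of the kind of object this test creates and deletes (group and names illustrative):

```yaml
# Illustrative CRD of the v1.15 era (apiextensions.k8s.io/v1beta1).
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com    # must be <plural>.<group>
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
```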
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:29:46.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with the 'name' label 'pod-adoption' is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:29:52.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9859" for this suite.
Nov 27 22:30:14.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:30:14.312: INFO: namespace replication-controller-9859 deletion completed in 22.197893203s

• [SLOW TEST:27.352 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:30:14.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Nov 27 22:30:22.477: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Nov 27 22:30:22.500: INFO: Pod pod-with-prestop-http-hook still exists
Nov 27 22:30:24.501: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Nov 27 22:30:24.507: INFO: Pod pod-with-prestop-http-hook still exists
Nov 27 22:30:26.501: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Nov 27 22:30:26.508: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:30:26.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-150" for this suite.
Nov 27 22:30:48.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:30:48.707: INFO: namespace container-lifecycle-hook-150 deletion completed in 22.181252268s

• [SLOW TEST:34.394 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
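The "check prestop hook" step works because a preStop HTTP hook makes the kubelet issue a GET to the handler pod (created in BeforeEach) before stopping the container. A sketch of the hooked container (image, path, and port illustrative):

```yaml
# Illustrative: on deletion, the kubelet performs this GET before the
# container is stopped; the handler pod records the request.
containers:
- name: pod-with-prestop-http-hook
  image: nginx                         # illustrative image
  lifecycle:
    preStop:
      httpGet:
        path: /echo?msg=prestop        # illustrative handler path
        port: 8080
```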
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:30:48.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Nov 27 22:30:48.775: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:30:56.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-990" for this suite.
Nov 27 22:31:02.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:31:02.796: INFO: namespace init-container-990 deletion completed in 6.184292586s

• [SLOW TEST:14.088 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
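The init-container spec above creates a `restartPolicy: Never` pod whose init containers must each run to completion, in order, before the app container starts. A minimal sketch of that pod shape (names, images, and commands are illustrative, not the test's generated manifest):

```python
# Sketch: a RestartNever pod with two init containers. With this restart
# policy, a failed init container fails the whole pod instead of retrying.
def restart_never_pod_with_inits():
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-init-test"},  # illustrative name
        "spec": {
            "restartPolicy": "Never",
            "initContainers": [
                # Run sequentially; each must exit 0 before the next starts.
                {"name": "init1", "image": "busybox", "command": ["/bin/true"]},
                {"name": "init2", "image": "busybox", "command": ["/bin/true"]},
            ],
            "containers": [
                {"name": "run1", "image": "busybox", "command": ["/bin/true"]},
            ],
        },
    }

spec = restart_never_pod_with_inits()["spec"]
print(spec["restartPolicy"], len(spec["initContainers"]))
```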
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:31:02.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-7833e1b4-3d3b-4d28-99ab-47aabe0faa62
STEP: Creating a pod to test consume configMaps
Nov 27 22:31:02.898: INFO: Waiting up to 5m0s for pod "pod-configmaps-7dbaae4e-e5cf-47c2-a434-b6967527f808" in namespace "configmap-4901" to be "success or failure"
Nov 27 22:31:02.930: INFO: Pod "pod-configmaps-7dbaae4e-e5cf-47c2-a434-b6967527f808": Phase="Pending", Reason="", readiness=false. Elapsed: 32.397234ms
Nov 27 22:31:04.936: INFO: Pod "pod-configmaps-7dbaae4e-e5cf-47c2-a434-b6967527f808": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038229649s
Nov 27 22:31:06.943: INFO: Pod "pod-configmaps-7dbaae4e-e5cf-47c2-a434-b6967527f808": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04542468s
STEP: Saw pod success
Nov 27 22:31:06.944: INFO: Pod "pod-configmaps-7dbaae4e-e5cf-47c2-a434-b6967527f808" satisfied condition "success or failure"
Nov 27 22:31:06.948: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-7dbaae4e-e5cf-47c2-a434-b6967527f808 container configmap-volume-test: 
STEP: delete the pod
Nov 27 22:31:06.972: INFO: Waiting for pod pod-configmaps-7dbaae4e-e5cf-47c2-a434-b6967527f808 to disappear
Nov 27 22:31:06.982: INFO: Pod pod-configmaps-7dbaae4e-e5cf-47c2-a434-b6967527f808 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:31:06.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4901" for this suite.
Nov 27 22:31:13.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:31:13.247: INFO: namespace configmap-4901 deletion completed in 6.213982733s

• [SLOW TEST:10.450 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
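The ConfigMap spec above mounts one ConfigMap into the same pod through two separate volumes. A sketch of that manifest shape (the real test generates random names like `configmap-test-volume-7833e1b4-...`; mount paths and the command here are illustrative):

```python
# Sketch: one ConfigMap consumed via two volumes in the same pod.
def pod_with_configmap_in_two_volumes(cm_name):
    def cm_volume(vol_name):
        return {"name": vol_name, "configMap": {"name": cm_name}}

    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-configmaps"},  # illustrative name
        "spec": {
            "restartPolicy": "Never",
            "volumes": [cm_volume("configmap-volume-1"),
                        cm_volume("configmap-volume-2")],
            "containers": [{
                "name": "configmap-volume-test",
                "image": "busybox",
                # Illustrative: read a key from the first mount.
                "command": ["sh", "-c", "cat /etc/configmap-volume-1/data-1"],
                "volumeMounts": [
                    {"name": "configmap-volume-1",
                     "mountPath": "/etc/configmap-volume-1"},
                    {"name": "configmap-volume-2",
                     "mountPath": "/etc/configmap-volume-2"},
                ],
            }],
        },
    }

spec = pod_with_configmap_in_two_volumes("my-config")["spec"]
print([v["configMap"]["name"] for v in spec["volumes"]])
```

Both volumes reference the same ConfigMap, which is exactly what "consumable in multiple volumes in the same pod" asserts.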
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:31:13.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Nov 27 22:31:13.345: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Nov 27 22:31:13.360: INFO: Number of nodes with available pods: 0
Nov 27 22:31:13.360: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Nov 27 22:31:13.451: INFO: Number of nodes with available pods: 0
Nov 27 22:31:13.452: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 22:31:14.460: INFO: Number of nodes with available pods: 0
Nov 27 22:31:14.460: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 22:31:15.459: INFO: Number of nodes with available pods: 0
Nov 27 22:31:15.459: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 22:31:16.459: INFO: Number of nodes with available pods: 0
Nov 27 22:31:16.459: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 22:31:17.459: INFO: Number of nodes with available pods: 1
Nov 27 22:31:17.459: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Nov 27 22:31:17.509: INFO: Number of nodes with available pods: 1
Nov 27 22:31:17.509: INFO: Number of running nodes: 0, number of available pods: 1
Nov 27 22:31:18.516: INFO: Number of nodes with available pods: 0
Nov 27 22:31:18.517: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Nov 27 22:31:18.535: INFO: Number of nodes with available pods: 0
Nov 27 22:31:18.535: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 22:31:19.542: INFO: Number of nodes with available pods: 0
Nov 27 22:31:19.542: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 22:31:20.544: INFO: Number of nodes with available pods: 0
Nov 27 22:31:20.544: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 22:31:21.543: INFO: Number of nodes with available pods: 0
Nov 27 22:31:21.543: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 22:31:22.543: INFO: Number of nodes with available pods: 0
Nov 27 22:31:22.543: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 22:31:23.543: INFO: Number of nodes with available pods: 0
Nov 27 22:31:23.543: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 22:31:24.543: INFO: Number of nodes with available pods: 0
Nov 27 22:31:24.543: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 22:31:25.543: INFO: Number of nodes with available pods: 0
Nov 27 22:31:25.543: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 22:31:26.543: INFO: Number of nodes with available pods: 0
Nov 27 22:31:26.543: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 22:31:27.553: INFO: Number of nodes with available pods: 0
Nov 27 22:31:27.553: INFO: Node iruya-worker is running more than one daemon pod
Nov 27 22:31:28.543: INFO: Number of nodes with available pods: 1
Nov 27 22:31:28.543: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8060, will wait for the garbage collector to delete the pods
Nov 27 22:31:28.617: INFO: Deleting DaemonSet.extensions daemon-set took: 8.72745ms
Nov 27 22:31:28.918: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.912641ms
Nov 27 22:31:35.424: INFO: Number of nodes with available pods: 0
Nov 27 22:31:35.424: INFO: Number of running nodes: 0, number of available pods: 0
Nov 27 22:31:35.429: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8060/daemonsets","resourceVersion":"11930031"},"items":null}

Nov 27 22:31:35.449: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8060/pods","resourceVersion":"11930031"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:31:35.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8060" for this suite.
Nov 27 22:31:41.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:31:41.682: INFO: namespace daemonsets-8060 deletion completed in 6.183749142s

• [SLOW TEST:28.432 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
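The "complex daemon" spec above constrains a DaemonSet with a pod-template `nodeSelector`, relabels a node blue, then green, and watches pods follow the label; it also switches the update strategy to RollingUpdate mid-test. A sketch of the DaemonSet shape involved (label key/values mirror the blue/green labels in the log; other names and the image are illustrative):

```python
# Sketch: a DaemonSet whose pods only schedule onto nodes carrying a
# matching "color" label, using a RollingUpdate strategy.
def daemonset_with_node_selector(color):
    return {
        "apiVersion": "apps/v1",
        "kind": "DaemonSet",
        "metadata": {"name": "daemon-set"},
        "spec": {
            "selector": {"matchLabels": {"daemonset-name": "daemon-set"}},
            "updateStrategy": {"type": "RollingUpdate"},
            "template": {
                "metadata": {"labels": {"daemonset-name": "daemon-set"}},
                "spec": {
                    # Pods land only on nodes labelled color=<color>.
                    "nodeSelector": {"color": color},
                    "containers": [{"name": "app",
                                    "image": "k8s.gcr.io/pause:3.1"}],
                },
            },
        },
    }

ds = daemonset_with_node_selector("blue")
print(ds["spec"]["template"]["spec"]["nodeSelector"])
```

Initially no node carries the label, so the log shows zero running pods; labelling a node `color=blue` launches one pod, and relabelling it `green` (with the selector updated to match) evicts and reschedules it.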
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:31:41.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Nov 27 22:31:41.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5329'
Nov 27 22:31:43.406: INFO: stderr: ""
Nov 27 22:31:43.406: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Nov 27 22:31:43.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5329'
Nov 27 22:31:44.822: INFO: stderr: ""
Nov 27 22:31:44.822: INFO: stdout: "update-demo-nautilus-4c267 update-demo-nautilus-tkthq "
Nov 27 22:31:44.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4c267 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5329'
Nov 27 22:31:46.058: INFO: stderr: ""
Nov 27 22:31:46.058: INFO: stdout: ""
Nov 27 22:31:46.059: INFO: update-demo-nautilus-4c267 is created but not running
Nov 27 22:31:51.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5329'
Nov 27 22:31:52.360: INFO: stderr: ""
Nov 27 22:31:52.361: INFO: stdout: "update-demo-nautilus-4c267 update-demo-nautilus-tkthq "
Nov 27 22:31:52.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4c267 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5329'
Nov 27 22:31:53.614: INFO: stderr: ""
Nov 27 22:31:53.615: INFO: stdout: "true"
Nov 27 22:31:53.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4c267 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5329'
Nov 27 22:31:54.882: INFO: stderr: ""
Nov 27 22:31:54.883: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Nov 27 22:31:54.883: INFO: validating pod update-demo-nautilus-4c267
Nov 27 22:31:54.889: INFO: got data: {
  "image": "nautilus.jpg"
}

Nov 27 22:31:54.889: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Nov 27 22:31:54.889: INFO: update-demo-nautilus-4c267 is verified up and running
Nov 27 22:31:54.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tkthq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5329'
Nov 27 22:31:56.180: INFO: stderr: ""
Nov 27 22:31:56.180: INFO: stdout: "true"
Nov 27 22:31:56.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tkthq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5329'
Nov 27 22:31:57.476: INFO: stderr: ""
Nov 27 22:31:57.476: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Nov 27 22:31:57.476: INFO: validating pod update-demo-nautilus-tkthq
Nov 27 22:31:57.482: INFO: got data: {
  "image": "nautilus.jpg"
}

Nov 27 22:31:57.482: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Nov 27 22:31:57.482: INFO: update-demo-nautilus-tkthq is verified up and running
STEP: using delete to clean up resources
Nov 27 22:31:57.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5329'
Nov 27 22:31:58.756: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 27 22:31:58.756: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Nov 27 22:31:58.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5329'
Nov 27 22:32:00.046: INFO: stderr: "No resources found.\n"
Nov 27 22:32:00.046: INFO: stdout: ""
Nov 27 22:32:00.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5329 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Nov 27 22:32:01.410: INFO: stderr: ""
Nov 27 22:32:01.410: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:32:01.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5329" for this suite.
Nov 27 22:32:07.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:32:07.596: INFO: namespace kubectl-5329 deletion completed in 6.175560822s

• [SLOW TEST:25.914 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
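The repeated `kubectl get pods -o template` calls in the spec above evaluate a go-template that prints `true` only when a container status named `update-demo` has a `running` state entry; an empty stdout means "created but not running", and the test retries. The same predicate restated in Python over a pod object (a sketch of the check's logic, not the framework's code):

```python
# Mirror of the e2e go-template: true iff a containerStatus named
# "update-demo" exists and its state map contains a "running" entry.
def update_demo_container_running(pod):
    for status in pod.get("status", {}).get("containerStatuses", []):
        if (status.get("name") == "update-demo"
                and "running" in status.get("state", {})):
            return True
    return False

# A freshly created pod has no containerStatuses yet -> template prints "".
pending_pod = {"status": {}}
# Once started, the status carries state.running -> template prints "true".
running_pod = {"status": {"containerStatuses": [
    {"name": "update-demo",
     "state": {"running": {"startedAt": "2020-11-27T22:31:50Z"}}},
]}}

print(update_demo_container_running(pending_pod),
      update_demo_container_running(running_pod))
```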
SSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:32:07.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Nov 27 22:32:08.214: INFO: created pod pod-service-account-defaultsa
Nov 27 22:32:08.214: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Nov 27 22:32:08.239: INFO: created pod pod-service-account-mountsa
Nov 27 22:32:08.239: INFO: pod pod-service-account-mountsa service account token volume mount: true
Nov 27 22:32:08.247: INFO: created pod pod-service-account-nomountsa
Nov 27 22:32:08.247: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Nov 27 22:32:08.284: INFO: created pod pod-service-account-defaultsa-mountspec
Nov 27 22:32:08.284: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Nov 27 22:32:08.348: INFO: created pod pod-service-account-mountsa-mountspec
Nov 27 22:32:08.349: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Nov 27 22:32:08.360: INFO: created pod pod-service-account-nomountsa-mountspec
Nov 27 22:32:08.360: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Nov 27 22:32:08.402: INFO: created pod pod-service-account-defaultsa-nomountspec
Nov 27 22:32:08.402: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Nov 27 22:32:08.426: INFO: created pod pod-service-account-mountsa-nomountspec
Nov 27 22:32:08.426: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Nov 27 22:32:08.474: INFO: created pod pod-service-account-nomountsa-nomountspec
Nov 27 22:32:08.474: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:32:08.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3847" for this suite.
Nov 27 22:32:38.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:32:38.783: INFO: namespace svcaccounts-3847 deletion completed in 30.265248176s

• [SLOW TEST:31.185 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
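The nine pods in the spec above enumerate the token-automount decision table: a pod-level `spec.automountServiceAccountToken`, when set, overrides the service account's `automountServiceAccountToken`, and when neither is set the token is mounted. That rule, restated as a small function and checked against every case in the log (the rule is documented Kubernetes behavior; the function name is ours):

```python
# Effective service-account token automount: pod setting wins if present,
# else the service account's setting, else mount by default.
def token_volume_mounted(sa_automount, pod_automount):
    if pod_automount is not None:
        return pod_automount
    if sa_automount is not None:
        return sa_automount
    return True

# The log's nine pods as (pod name, SA setting, pod setting) -> mounted?
cases = [
    ("defaultsa",             None,  None,  True),
    ("mountsa",               True,  None,  True),
    ("nomountsa",             False, None,  False),
    ("defaultsa-mountspec",   None,  True,  True),
    ("mountsa-mountspec",     True,  True,  True),
    ("nomountsa-mountspec",   False, True,  True),
    ("defaultsa-nomountspec", None,  False, False),
    ("mountsa-nomountspec",   True,  False, False),
    ("nomountsa-nomountspec", False, False, False),
]
for name, sa, pod, expected in cases:
    assert token_volume_mounted(sa, pod) == expected, name
print("all nine cases match the log")
```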
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 27 22:32:38.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Nov 27 22:32:38.860: INFO: Waiting up to 5m0s for pod "pod-da67c5d2-fc51-4409-8fca-7efad77c3eb2" in namespace "emptydir-6698" to be "success or failure"
Nov 27 22:32:38.865: INFO: Pod "pod-da67c5d2-fc51-4409-8fca-7efad77c3eb2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.568645ms
Nov 27 22:32:40.871: INFO: Pod "pod-da67c5d2-fc51-4409-8fca-7efad77c3eb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011320613s
Nov 27 22:32:42.878: INFO: Pod "pod-da67c5d2-fc51-4409-8fca-7efad77c3eb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018084331s
STEP: Saw pod success
Nov 27 22:32:42.878: INFO: Pod "pod-da67c5d2-fc51-4409-8fca-7efad77c3eb2" satisfied condition "success or failure"
Nov 27 22:32:42.883: INFO: Trying to get logs from node iruya-worker pod pod-da67c5d2-fc51-4409-8fca-7efad77c3eb2 container test-container: 
STEP: delete the pod
Nov 27 22:32:42.908: INFO: Waiting for pod pod-da67c5d2-fc51-4409-8fca-7efad77c3eb2 to disappear
Nov 27 22:32:42.911: INFO: Pod pod-da67c5d2-fc51-4409-8fca-7efad77c3eb2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 27 22:32:42.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6698" for this suite.
Nov 27 22:32:48.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 27 22:32:49.132: INFO: namespace emptydir-6698 deletion completed in 6.213080891s

• [SLOW TEST:10.346 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
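The EmptyDir spec above runs a non-root pod against a tmpfs-backed `emptyDir` (`medium: Memory`) and verifies a file created at mode 0666. A sketch of that pod shape (the UID, image, and command are illustrative; the real test uses the e2e mounttest image with its own flags):

```python
# Sketch: tmpfs emptyDir consumed by a non-root container that writes a
# 0666 file and reports its permissions.
def emptydir_tmpfs_pod():
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-emptydir-tmpfs"},  # illustrative name
        "spec": {
            "restartPolicy": "Never",
            "securityContext": {"runAsUser": 1001},  # any non-root UID
            "volumes": [{"name": "test-volume",
                         "emptyDir": {"medium": "Memory"}}],  # tmpfs-backed
            "containers": [{
                "name": "test-container",
                "image": "busybox",
                "command": ["sh", "-c",
                            "touch /test-volume/f && chmod 0666 /test-volume/f"
                            " && stat -c %a /test-volume/f"],
                "volumeMounts": [{"name": "test-volume",
                                  "mountPath": "/test-volume"}],
            }],
        },
    }

spec = emptydir_tmpfs_pod()["spec"]
print(spec["volumes"][0]["emptyDir"], spec["securityContext"])
```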
SSS
Nov 27 22:32:49.134: INFO: Running AfterSuite actions on all nodes
Nov 27 22:32:49.136: INFO: Running AfterSuite actions on node 1
Nov 27 22:32:49.136: INFO: Skipping dumping logs from cluster

Ran 215 of 4413 Specs in 6280.136 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4198 Skipped
PASS