I0429 19:04:29.843233 6 e2e.go:243] Starting e2e run "6a1282c3-4820-436f-8b1c-254f0c01e35d" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1619723068 - Will randomize all specs
Will run 215 of 4413 specs

Apr 29 19:04:30.052: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 19:04:30.054: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 29 19:04:30.077: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 29 19:04:30.111: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 29 19:04:30.111: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 29 19:04:30.111: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 29 19:04:30.119: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 29 19:04:30.119: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 29 19:04:30.119: INFO: e2e test version: v1.15.12
Apr 29 19:04:30.120: INFO: kube-apiserver version: v1.15.12
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:04:30.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
Apr 29 19:04:30.206: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 29 19:04:34.347: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:04:34.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1175" for this suite.
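The Container Runtime spec above checks that a message written to the container's termination-message path is surfaced in the container status when the pod succeeds. A minimal manifest exercising the same behavior might look like the following sketch (pod and container names are hypothetical, not the ones the framework generates):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                 # illustrative image
    # Write the message the kubelet will surface in the container status.
    command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    # FallbackToLogsOnError: use the file when present; fall back to the
    # tail of the container log only if the container exits with an error.
    terminationMessagePolicy: FallbackToLogsOnError
```

Once the pod succeeds, the message appears at `.status.containerStatuses[0].state.terminated.message`, matching the `Expected: &{OK}` assertion in the log.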
Apr 29 19:04:40.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:04:40.501: INFO: namespace container-runtime-1175 deletion completed in 6.127395146s

• [SLOW TEST:10.380 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:04:40.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
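The ServiceAccounts spec above exercises opting out of API token automounting, which can be set on the ServiceAccount, on the pod, or both; the pod-level field takes precedence when both are present. A minimal sketch of the opt-out (all names hypothetical):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa            # hypothetical name
# No token volume is mounted into pods using this ServiceAccount by default.
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: nomount-pod           # hypothetical name
spec:
  serviceAccountName: nomount-sa
  # The pod-level setting overrides the ServiceAccount-level one if set.
  automountServiceAccountToken: false
  containers:
  - name: main
    image: busybox            # illustrative image
    command: ["sleep", "3600"]
```

The test's pod matrix in the log (defaultsa/mountsa/nomountsa crossed with mountspec/nomountspec) covers exactly these combinations of the two fields.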
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Apr 29 19:04:41.110: INFO: created pod pod-service-account-defaultsa
Apr 29 19:04:41.110: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Apr 29 19:04:41.116: INFO: created pod pod-service-account-mountsa
Apr 29 19:04:41.116: INFO: pod pod-service-account-mountsa service account token volume mount: true
Apr 29 19:04:41.142: INFO: created pod pod-service-account-nomountsa
Apr 29 19:04:41.142: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Apr 29 19:04:41.175: INFO: created pod pod-service-account-defaultsa-mountspec
Apr 29 19:04:41.175: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Apr 29 19:04:41.196: INFO: created pod pod-service-account-mountsa-mountspec
Apr 29 19:04:41.196: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Apr 29 19:04:41.274: INFO: created pod pod-service-account-nomountsa-mountspec
Apr 29 19:04:41.274: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Apr 29 19:04:41.312: INFO: created pod pod-service-account-defaultsa-nomountspec
Apr 29 19:04:41.312: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Apr 29 19:04:41.410: INFO: created pod pod-service-account-mountsa-nomountspec
Apr 29 19:04:41.410: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Apr 29 19:04:41.462: INFO: created pod pod-service-account-nomountsa-nomountspec
Apr 29 19:04:41.462: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:04:41.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4363" for this suite.
Apr 29 19:05:09.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:05:09.718: INFO: namespace svcaccounts-4363 deletion completed in 28.226649607s

• [SLOW TEST:29.218 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:05:09.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Apr 29 19:05:09.775: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Apr 29 19:05:09.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2470'
Apr 29 19:05:12.595: INFO: stderr: ""
Apr 29 19:05:12.595: INFO: stdout: "service/redis-slave created\n"
Apr 29 19:05:12.596: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Apr 29 19:05:12.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2470'
Apr 29 19:05:12.893: INFO: stderr: ""
Apr 29 19:05:12.893: INFO: stdout: "service/redis-master created\n"
Apr 29 19:05:12.894: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Apr 29 19:05:12.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2470'
Apr 29 19:05:13.221: INFO: stderr: ""
Apr 29 19:05:13.221: INFO: stdout: "service/frontend created\n"
Apr 29 19:05:13.221: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Apr 29 19:05:13.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2470'
Apr 29 19:05:13.470: INFO: stderr: ""
Apr 29 19:05:13.470: INFO: stdout: "deployment.apps/frontend created\n"
Apr 29 19:05:13.470: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 29 19:05:13.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2470'
Apr 29 19:05:13.793: INFO: stderr: ""
Apr 29 19:05:13.793: INFO: stdout: "deployment.apps/redis-master created\n"
Apr 29 19:05:13.793: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Apr 29 19:05:13.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2470'
Apr 29 19:05:14.087: INFO: stderr: ""
Apr 29 19:05:14.087: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Apr 29 19:05:14.087: INFO: Waiting for all frontend pods to be Running.
Apr 29 19:05:24.138: INFO: Waiting for frontend to serve content.
Apr 29 19:05:24.167: INFO: Trying to add a new entry to the guestbook.
Apr 29 19:05:24.177: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Apr 29 19:05:24.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2470'
Apr 29 19:05:24.318: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 29 19:05:24.318: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Apr 29 19:05:24.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2470'
Apr 29 19:05:24.444: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 29 19:05:24.444: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Apr 29 19:05:24.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2470'
Apr 29 19:05:24.577: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 29 19:05:24.577: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 29 19:05:24.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2470'
Apr 29 19:05:24.678: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 29 19:05:24.678: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 29 19:05:24.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2470'
Apr 29 19:05:24.774: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 29 19:05:24.774: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Apr 29 19:05:24.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2470'
Apr 29 19:05:24.882: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 29 19:05:24.882: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:05:24.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2470" for this suite.
Apr 29 19:06:10.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:06:10.989: INFO: namespace kubectl-2470 deletion completed in 46.103470375s

• [SLOW TEST:61.270 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:06:10.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 29 19:06:11.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8775'
Apr 29 19:06:11.288: INFO: stderr: ""
Apr 29 19:06:11.288: INFO: stdout: "replicationcontroller/redis-master created\n"
Apr 29 19:06:11.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8775'
Apr 29 19:06:11.606: INFO: stderr: ""
Apr 29 19:06:11.606: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Apr 29 19:06:12.611: INFO: Selector matched 1 pods for map[app:redis]
Apr 29 19:06:12.611: INFO: Found 0 / 1
Apr 29 19:06:13.611: INFO: Selector matched 1 pods for map[app:redis]
Apr 29 19:06:13.611: INFO: Found 0 / 1
Apr 29 19:06:14.611: INFO: Selector matched 1 pods for map[app:redis]
Apr 29 19:06:14.611: INFO: Found 0 / 1
Apr 29 19:06:15.610: INFO: Selector matched 1 pods for map[app:redis]
Apr 29 19:06:15.610: INFO: Found 1 / 1
Apr 29 19:06:15.610: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 29 19:06:15.614: INFO: Selector matched 1 pods for map[app:redis]
Apr 29 19:06:15.614: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 29 19:06:15.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-b8b26 --namespace=kubectl-8775'
Apr 29 19:06:15.728: INFO: stderr: ""
Apr 29 19:06:15.728: INFO: stdout: "Name: redis-master-b8b26\nNamespace: kubectl-8775\nPriority: 0\nNode: iruya-worker2/172.18.0.4\nStart Time: Thu, 29 Apr 2021 19:06:11 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.31\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://ada2985b8ed5686224e5de15651fb9d18013e7acc73c2a049235d0eeff49657a\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 29 Apr 2021 19:06:14 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-w5r6p (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-w5r6p:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-w5r6p\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-8775/redis-master-b8b26 to iruya-worker2\n Normal Pulled 3s kubelet, iruya-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-worker2 Created container redis-master\n Normal Started 1s kubelet, iruya-worker2 Started container redis-master\n"
Apr 29 19:06:15.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-8775'
Apr 29 19:06:15.852: INFO: stderr: ""
Apr 29 19:06:15.852: INFO: stdout: "Name: redis-master\nNamespace: kubectl-8775\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-b8b26\n"
Apr 29 19:06:15.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-8775'
Apr 29 19:06:15.951: INFO: stderr: ""
Apr 29 19:06:15.951: INFO: stdout: "Name: redis-master\nNamespace: kubectl-8775\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.96.184.181\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.31:6379\nSession Affinity: None\nEvents: \n"
Apr 29 19:06:15.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
Apr 29 19:06:16.094: INFO: stderr: ""
Apr 29 19:06:16.094: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Tue, 13 Apr 2021 08:08:26 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 29 Apr 2021 19:05:49 +0000 Tue, 13 Apr 2021 08:08:25 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 29 Apr 2021 19:05:49 +0000 Tue, 13 Apr 2021 08:08:25 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 29 Apr 2021 19:05:49 +0000 Tue, 13 Apr 2021 08:08:25 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 29 Apr 2021 19:05:49 +0000 Tue, 13 Apr 2021 08:08:56 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.5\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759824Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759824Ki\n pods: 110\nSystem Info:\n Machine ID: a3f1bf480bee4ba1be0d7febdcd2e8d2\n System UUID: 10a84bce-4959-48c9-a590-36d45dfcec7d\n Boot ID: dc0058b1-aa97-45b0-baf9-d3a69a0326a3\n Kernel Version: 4.15.0-141-generic\n OS Image: Ubuntu 20.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0-106-gce4439a8\n Kubelet Version: v1.15.12\n Kube-Proxy Version: v1.15.12\nPodCIDR: 10.244.0.0/24\nProviderID: kind://docker/iruya/iruya-control-plane\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-5d4dd4b4db-jpgqt 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 16d\n kube-system coredns-5d4dd4b4db-vvtjr 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 16d\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 16d\n kube-system kindnet-vqf27 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 16d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 16d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 16d\n kube-system kube-proxy-hr9lp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 16d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 16d\n local-path-storage local-path-provisioner-7f465859dc-kvv5n 0 (0%) 0 (0%) 0 (0%) 0 (0%) 16d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n"
Apr 29 19:06:16.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-8775'
Apr 29 19:06:16.206: INFO: stderr: ""
Apr 29 19:06:16.206: INFO: stdout: "Name: kubectl-8775\nLabels: e2e-framework=kubectl\n e2e-run=6a1282c3-4820-436f-8b1c-254f0c01e35d\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:06:16.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8775" for this suite.
Apr 29 19:06:38.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:06:38.318: INFO: namespace kubectl-8775 deletion completed in 22.108206336s

• [SLOW TEST:27.328 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:06:38.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Apr 29 19:06:38.379: INFO: Waiting up to 5m0s for pod "client-containers-74bdfdae-523d-4316-8ad5-9ebde48693df" in namespace "containers-5" to be "success or failure"
Apr 29 19:06:38.389: INFO: Pod "client-containers-74bdfdae-523d-4316-8ad5-9ebde48693df": Phase="Pending", Reason="", readiness=false. Elapsed: 10.244496ms
Apr 29 19:06:40.447: INFO: Pod "client-containers-74bdfdae-523d-4316-8ad5-9ebde48693df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06787338s
Apr 29 19:06:42.450: INFO: Pod "client-containers-74bdfdae-523d-4316-8ad5-9ebde48693df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071317654s
STEP: Saw pod success
Apr 29 19:06:42.450: INFO: Pod "client-containers-74bdfdae-523d-4316-8ad5-9ebde48693df" satisfied condition "success or failure"
Apr 29 19:06:42.453: INFO: Trying to get logs from node iruya-worker pod client-containers-74bdfdae-523d-4316-8ad5-9ebde48693df container test-container:
STEP: delete the pod
Apr 29 19:06:42.474: INFO: Waiting for pod client-containers-74bdfdae-523d-4316-8ad5-9ebde48693df to disappear
Apr 29 19:06:42.484: INFO: Pod client-containers-74bdfdae-523d-4316-8ad5-9ebde48693df no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:06:42.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5" for this suite.
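The Docker Containers spec above creates a pod with `command` and `args` left blank, so the image's own ENTRYPOINT and CMD run unmodified. A hedged sketch of such a pod (the name and image are illustrative, not the test's actual values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox            # illustrative image
    # No command/args: the container runs the image's ENTRYPOINT/CMD as-is.
```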
Apr 29 19:06:48.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:06:48.617: INFO: namespace containers-5 deletion completed in 6.129494265s
• [SLOW TEST:10.299 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:06:48.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-c0e4a9c2-5083-4763-a592-e359add0ea59
STEP: Creating a pod to test consume secrets
Apr 29 19:06:48.690: INFO: Waiting up to 5m0s for pod "pod-secrets-ba691e50-7c98-4927-8458-9266b1ff4d03" in namespace "secrets-8869" to be "success or failure"
Apr 29 19:06:48.700: INFO: Pod "pod-secrets-ba691e50-7c98-4927-8458-9266b1ff4d03": Phase="Pending", Reason="", readiness=false. Elapsed: 9.909515ms
Apr 29 19:06:50.705: INFO: Pod "pod-secrets-ba691e50-7c98-4927-8458-9266b1ff4d03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014194222s
Apr 29 19:06:52.708: INFO: Pod "pod-secrets-ba691e50-7c98-4927-8458-9266b1ff4d03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017379826s
STEP: Saw pod success
Apr 29 19:06:52.708: INFO: Pod "pod-secrets-ba691e50-7c98-4927-8458-9266b1ff4d03" satisfied condition "success or failure"
Apr 29 19:06:52.710: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-ba691e50-7c98-4927-8458-9266b1ff4d03 container secret-volume-test:
STEP: delete the pod
Apr 29 19:06:52.749: INFO: Waiting for pod pod-secrets-ba691e50-7c98-4927-8458-9266b1ff4d03 to disappear
Apr 29 19:06:52.754: INFO: Pod pod-secrets-ba691e50-7c98-4927-8458-9266b1ff4d03 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:06:52.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8869" for this suite.
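The repeated `Phase="Pending" ... Elapsed: ...` entries above come from the framework polling the pod until it reaches a terminal phase (the "success or failure" condition). A minimal Python sketch of that polling pattern, for illustration only; the real framework is Go, and `wait_for_pod_phase`/`get_phase` are hypothetical names, not its API:

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until the pod reaches a terminal phase
    ("Succeeded" or "Failed"), or raise after `timeout` seconds.
    Mirrors the framework's "success or failure" wait loop."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)  # the log shows roughly 2s between polls
    raise TimeoutError("pod never reached a terminal phase")
```

Each poll logs the elapsed time, which is why the `Elapsed:` values above grow by about the poll interval.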
Apr 29 19:06:58.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:06:58.871: INFO: namespace secrets-8869 deletion completed in 6.108348906s
• [SLOW TEST:10.253 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:06:58.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 29 19:07:03.018: INFO: Waiting up to 5m0s for pod "client-envvars-6b7bb320-2a4a-4085-ab83-98db1abf1bde" in namespace "pods-4909" to be "success or failure"
Apr 29 19:07:03.024: INFO: Pod "client-envvars-6b7bb320-2a4a-4085-ab83-98db1abf1bde": Phase="Pending", Reason="", readiness=false. Elapsed: 5.537718ms
Apr 29 19:07:05.028: INFO: Pod "client-envvars-6b7bb320-2a4a-4085-ab83-98db1abf1bde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00930249s
Apr 29 19:07:07.032: INFO: Pod "client-envvars-6b7bb320-2a4a-4085-ab83-98db1abf1bde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013878908s
STEP: Saw pod success
Apr 29 19:07:07.032: INFO: Pod "client-envvars-6b7bb320-2a4a-4085-ab83-98db1abf1bde" satisfied condition "success or failure"
Apr 29 19:07:07.035: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-6b7bb320-2a4a-4085-ab83-98db1abf1bde container env3cont:
STEP: delete the pod
Apr 29 19:07:07.068: INFO: Waiting for pod client-envvars-6b7bb320-2a4a-4085-ab83-98db1abf1bde to disappear
Apr 29 19:07:07.087: INFO: Pod client-envvars-6b7bb320-2a4a-4085-ab83-98db1abf1bde no longer exists
[AfterEach] [k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:07:07.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4909" for this suite.
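The "should contain environment variables for services" test above checks the env vars the kubelet injects for services that exist when a pod starts: the service name is uppercased, dashes become underscores, and at minimum `{NAME}_SERVICE_HOST` and `{NAME}_SERVICE_PORT` are set (Kubernetes also injects Docker-link-style `{NAME}_PORT_*` variables, omitted here). A small illustrative sketch of the naming rule; the helper name is invented:

```python
def service_env_vars(name: str, cluster_ip: str, port: int) -> dict:
    """Derive the basic env vars Kubernetes injects into pods for an
    existing service: name uppercased, '-' replaced with '_'."""
    key = name.upper().replace("-", "_")
    return {
        f"{key}_SERVICE_HOST": cluster_ip,
        f"{key}_SERVICE_PORT": str(port),
    }
```

For example, a service named `endpoint-test2` yields `ENDPOINT_TEST2_SERVICE_HOST` and `ENDPOINT_TEST2_SERVICE_PORT`.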
Apr 29 19:07:51.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:07:51.197: INFO: namespace pods-4909 deletion completed in 44.105544631s
• [SLOW TEST:52.326 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should contain environment variables for services [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:07:51.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-ab06a83f-5d6e-444e-81fb-0bdd33bd845d
STEP: Creating a pod to test consume secrets
Apr 29 19:07:51.279: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-595709dd-32ce-4536-adc8-e41a93f393a4" in namespace "projected-6279" to be "success or failure"
Apr 29 19:07:51.283: INFO: Pod "pod-projected-secrets-595709dd-32ce-4536-adc8-e41a93f393a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.181526ms
Apr 29 19:07:53.287: INFO: Pod "pod-projected-secrets-595709dd-32ce-4536-adc8-e41a93f393a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008422674s
Apr 29 19:07:55.292: INFO: Pod "pod-projected-secrets-595709dd-32ce-4536-adc8-e41a93f393a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012737852s
STEP: Saw pod success
Apr 29 19:07:55.292: INFO: Pod "pod-projected-secrets-595709dd-32ce-4536-adc8-e41a93f393a4" satisfied condition "success or failure"
Apr 29 19:07:55.295: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-595709dd-32ce-4536-adc8-e41a93f393a4 container projected-secret-volume-test:
STEP: delete the pod
Apr 29 19:07:55.315: INFO: Waiting for pod pod-projected-secrets-595709dd-32ce-4536-adc8-e41a93f393a4 to disappear
Apr 29 19:07:55.334: INFO: Pod pod-projected-secrets-595709dd-32ce-4536-adc8-e41a93f393a4 no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:07:55.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6279" for this suite.
Apr 29 19:08:01.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:08:01.445: INFO: namespace projected-6279 deletion completed in 6.107491598s
• [SLOW TEST:10.248 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:08:01.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:08:05.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1718" for this suite.
Apr 29 19:08:45.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:08:45.676: INFO: namespace kubelet-test-1718 deletion completed in 40.126709663s
• [SLOW TEST:44.230 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox Pod with hostAliases
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:08:45.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 29 19:08:53.834: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 29 19:08:53.905: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 29 19:08:55.905: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 29 19:08:55.908: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 29 19:08:57.905: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 29 19:08:57.909: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 29 19:08:59.905: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 29 19:08:59.909: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 29 19:09:01.905: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 29 19:09:01.908: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 29 19:09:03.905: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 29 19:09:03.909: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 29 19:09:05.905: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 29 19:09:05.907: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 29 19:09:07.905: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 29 19:09:07.909: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 29 19:09:09.905: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 29 19:09:09.909: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 29 19:09:11.905: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 29 19:09:11.909: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 29 19:09:13.905: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 29 19:09:13.909: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 29 19:09:15.905: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 29 19:09:15.920: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 29 19:09:17.905: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 29 19:09:17.909: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 29 19:09:19.905: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 29 19:09:19.908: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:09:19.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6776" for this suite.
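The long run of "Waiting for pod ... to disappear / still exists" pairs above is a deletion wait loop: the test deletes the pod and then polls about every 2 seconds until the API server no longer returns it (the prestop exec hook runs during this graceful termination window). A minimal Python sketch of that loop, with invented names, not the framework's actual Go API:

```python
import time

def wait_for_disappearance(exists, timeout=120.0, interval=2.0):
    """Poll exists() every `interval` seconds until it returns False
    or `timeout` seconds elapse. Returns True if the object vanished."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if not exists():
            return True
        time.sleep(interval)
    return False
```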
Apr 29 19:09:41.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:09:42.072: INFO: namespace container-lifecycle-hook-6776 deletion completed in 22.154416406s
• [SLOW TEST:56.395 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop exec hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:09:42.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 29 19:09:42.171: INFO: Waiting up to 5m0s for pod "pod-f2cd8f67-b495-4dc3-baa7-80ef762cb944" in namespace "emptydir-117" to be "success or failure"
Apr 29 19:09:42.179: INFO: Pod "pod-f2cd8f67-b495-4dc3-baa7-80ef762cb944": Phase="Pending", Reason="", readiness=false. Elapsed: 7.760097ms
Apr 29 19:09:44.183: INFO: Pod "pod-f2cd8f67-b495-4dc3-baa7-80ef762cb944": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011499388s
Apr 29 19:09:46.186: INFO: Pod "pod-f2cd8f67-b495-4dc3-baa7-80ef762cb944": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015176794s
STEP: Saw pod success
Apr 29 19:09:46.186: INFO: Pod "pod-f2cd8f67-b495-4dc3-baa7-80ef762cb944" satisfied condition "success or failure"
Apr 29 19:09:46.189: INFO: Trying to get logs from node iruya-worker2 pod pod-f2cd8f67-b495-4dc3-baa7-80ef762cb944 container test-container:
STEP: delete the pod
Apr 29 19:09:46.223: INFO: Waiting for pod pod-f2cd8f67-b495-4dc3-baa7-80ef762cb944 to disappear
Apr 29 19:09:46.239: INFO: Pod pod-f2cd8f67-b495-4dc3-baa7-80ef762cb944 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:09:46.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-117" for this suite.
Apr 29 19:09:52.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:09:52.368: INFO: namespace emptydir-117 deletion completed in 6.126435813s
• [SLOW TEST:10.296 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:09:52.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-128
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-128 to expose endpoints map[]
Apr 29 19:09:52.483: INFO: Get endpoints failed (23.952678ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Apr 29 19:09:53.487: INFO: successfully validated that service endpoint-test2 in namespace services-128 exposes endpoints map[] (1.02799925s elapsed)
STEP: Creating pod pod1 in namespace services-128
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-128 to expose endpoints map[pod1:[80]]
Apr 29 19:09:56.604: INFO: successfully validated that service endpoint-test2 in namespace services-128 exposes endpoints map[pod1:[80]] (3.109574664s elapsed)
STEP: Creating pod pod2 in namespace services-128
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-128 to expose endpoints map[pod1:[80] pod2:[80]]
Apr 29 19:10:00.664: INFO: successfully validated that service endpoint-test2 in namespace services-128 exposes endpoints map[pod1:[80] pod2:[80]] (4.055976252s elapsed)
STEP: Deleting pod pod1 in namespace services-128
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-128 to expose endpoints map[pod2:[80]]
Apr 29 19:10:01.688: INFO: successfully validated that service endpoint-test2 in namespace services-128 exposes endpoints map[pod2:[80]] (1.01920217s elapsed)
STEP: Deleting pod pod2 in namespace services-128
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-128 to expose endpoints map[]
Apr 29 19:10:02.719: INFO: successfully validated that service endpoint-test2 in namespace services-128 exposes endpoints map[] (1.025946592s elapsed)
[AfterEach] [sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:10:02.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-128" for this suite.
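The Services test above repeatedly validates that the service's endpoints match an expected map of pod name to port list (`map[]`, `map[pod1:[80]]`, `map[pod1:[80] pod2:[80]]`, ...) as pods are created and deleted. The comparison step can be sketched in Python as an order-insensitive map check; `endpoints_match` is an illustrative name, not the framework's:

```python
def endpoints_match(observed: dict, expected: dict) -> bool:
    """Compare an observed endpoints map (pod name -> list of ports)
    against the expected one, ignoring port order within each pod."""
    normalize = lambda m: {pod: sorted(ports) for pod, ports in m.items()}
    return normalize(observed) == normalize(expected)
```

In the real test this check runs inside a wait loop (up to 3m0s), since endpoint controllers update the Endpoints object asynchronously after pod creation or deletion.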
Apr 29 19:10:24.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:10:24.856: INFO: namespace services-128 deletion completed in 22.108953113s
[AfterEach] [sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:32.487 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve a basic endpoint from pods [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:10:24.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-1f3c2deb-b893-429c-a848-e92e0e2fac91
STEP: Creating a pod to test consume configMaps
Apr 29 19:10:24.949: INFO: Waiting up to 5m0s for pod "pod-configmaps-ba4584b2-d6bd-44ee-bb09-c81c3c1f0249" in namespace "configmap-844" to be "success or failure"
Apr 29 19:10:24.968: INFO: Pod "pod-configmaps-ba4584b2-d6bd-44ee-bb09-c81c3c1f0249": Phase="Pending", Reason="", readiness=false. Elapsed: 18.843368ms
Apr 29 19:10:26.972: INFO: Pod "pod-configmaps-ba4584b2-d6bd-44ee-bb09-c81c3c1f0249": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02284638s
Apr 29 19:10:28.976: INFO: Pod "pod-configmaps-ba4584b2-d6bd-44ee-bb09-c81c3c1f0249": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026973853s
STEP: Saw pod success
Apr 29 19:10:28.976: INFO: Pod "pod-configmaps-ba4584b2-d6bd-44ee-bb09-c81c3c1f0249" satisfied condition "success or failure"
Apr 29 19:10:28.979: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-ba4584b2-d6bd-44ee-bb09-c81c3c1f0249 container configmap-volume-test:
STEP: delete the pod
Apr 29 19:10:29.094: INFO: Waiting for pod pod-configmaps-ba4584b2-d6bd-44ee-bb09-c81c3c1f0249 to disappear
Apr 29 19:10:29.103: INFO: Pod pod-configmaps-ba4584b2-d6bd-44ee-bb09-c81c3c1f0249 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:10:29.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-844" for this suite.
Apr 29 19:10:35.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:10:35.215: INFO: namespace configmap-844 deletion completed in 6.108594416s
• [SLOW TEST:10.358 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:10:35.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-421d0780-dd25-4e1d-b4d8-06bd47133dbc
[AfterEach] [sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:10:35.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1878" for this suite.
Apr 29 19:10:41.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:10:41.404: INFO: namespace secrets-1878 deletion completed in 6.105039051s
• [SLOW TEST:6.188 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
should fail to create secret due to empty secret key [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:10:41.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-f72ff016-35ae-4697-910c-b6866747c0a2 in namespace container-probe-4426
Apr 29 19:10:45.470: INFO: Started pod busybox-f72ff016-35ae-4697-910c-b6866747c0a2 in namespace container-probe-4426
STEP: checking the pod's current state and verifying that restartCount is present
Apr 29 19:10:45.473: INFO: Initial restart count of pod busybox-f72ff016-35ae-4697-910c-b6866747c0a2 is 0
Apr 29 19:11:33.588: INFO: Restart count of pod container-probe-4426/busybox-f72ff016-35ae-4697-910c-b6866747c0a2 is now 1 (48.114824043s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:11:33.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4426" for this suite.
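The liveness-probe test above records the container's initial `restartCount` (0) and then watches until it increases (to 1 after ~48s), which proves the kubelet killed and restarted the container when the `cat /tmp/health` probe started failing. The watch can be sketched in Python as follows; `wait_for_restart` and `get_restart_count` are illustrative names, not the framework's Go API:

```python
def wait_for_restart(get_restart_count, initial=0, max_polls=1000):
    """Poll the container's restart count until it exceeds `initial`,
    the signal that the liveness probe triggered a restart."""
    for _ in range(max_polls):
        count = get_restart_count()
        if count > initial:
            return count
    raise TimeoutError("restart count never increased")
```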
Apr 29 19:11:39.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:11:39.772: INFO: namespace container-probe-4426 deletion completed in 6.157667398s • [SLOW TEST:58.368 seconds] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:11:39.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-b369344d-096b-4b84-aa5c-d76714dee268 STEP: Creating a pod to test consume secrets Apr 29 19:11:39.850: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7068d6dc-8656-4c6e-a43a-21be1362dbcc" in namespace "projected-6080" to be "success or 
failure" Apr 29 19:11:39.862: INFO: Pod "pod-projected-secrets-7068d6dc-8656-4c6e-a43a-21be1362dbcc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.223296ms Apr 29 19:11:41.866: INFO: Pod "pod-projected-secrets-7068d6dc-8656-4c6e-a43a-21be1362dbcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015936607s Apr 29 19:11:43.870: INFO: Pod "pod-projected-secrets-7068d6dc-8656-4c6e-a43a-21be1362dbcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019769553s STEP: Saw pod success Apr 29 19:11:43.870: INFO: Pod "pod-projected-secrets-7068d6dc-8656-4c6e-a43a-21be1362dbcc" satisfied condition "success or failure" Apr 29 19:11:43.873: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-7068d6dc-8656-4c6e-a43a-21be1362dbcc container secret-volume-test: STEP: delete the pod Apr 29 19:11:43.887: INFO: Waiting for pod pod-projected-secrets-7068d6dc-8656-4c6e-a43a-21be1362dbcc to disappear Apr 29 19:11:43.898: INFO: Pod pod-projected-secrets-7068d6dc-8656-4c6e-a43a-21be1362dbcc no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:11:43.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6080" for this suite. 
Apr 29 19:11:50.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:11:50.226: INFO: namespace projected-6080 deletion completed in 6.325635025s • [SLOW TEST:10.453 seconds] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:11:50.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-7314756b-4f56-43ca-9711-20dbb718dbbc STEP: Creating a pod to test consume secrets Apr 29 19:11:50.317: INFO: Waiting up to 5m0s for pod "pod-secrets-35ca0b06-8be0-40fe-8737-ef093fd6663d" in namespace "secrets-8627" to be "success or failure" Apr 29 19:11:50.324: INFO: Pod 
"pod-secrets-35ca0b06-8be0-40fe-8737-ef093fd6663d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.587683ms Apr 29 19:11:52.328: INFO: Pod "pod-secrets-35ca0b06-8be0-40fe-8737-ef093fd6663d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011011889s Apr 29 19:11:54.333: INFO: Pod "pod-secrets-35ca0b06-8be0-40fe-8737-ef093fd6663d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015423121s STEP: Saw pod success Apr 29 19:11:54.333: INFO: Pod "pod-secrets-35ca0b06-8be0-40fe-8737-ef093fd6663d" satisfied condition "success or failure" Apr 29 19:11:54.336: INFO: Trying to get logs from node iruya-worker pod pod-secrets-35ca0b06-8be0-40fe-8737-ef093fd6663d container secret-env-test: STEP: delete the pod Apr 29 19:11:54.449: INFO: Waiting for pod pod-secrets-35ca0b06-8be0-40fe-8737-ef093fd6663d to disappear Apr 29 19:11:54.455: INFO: Pod pod-secrets-35ca0b06-8be0-40fe-8737-ef093fd6663d no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:11:54.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8627" for this suite. 
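The secrets-in-env-vars test above injects a Secret's decoded values as container environment variables. A small sketch of that decode step, assuming a hypothetical `secret_to_env` helper (the real injection is done by the kubelet from a `secretKeyRef`/`envFrom` source, not by user code):

```python
import base64

def secret_to_env(secret_data, prefix=""):
    """Decode a Secret's base64 `data` map into environment variables,
    optionally applying an envFrom-style name prefix (illustrative sketch)."""
    return {prefix + key: base64.b64decode(value).decode()
            for key, value in secret_data.items()}
```

Given a Secret storing `SECRET_DATA: dmFsdWUtMQ==`, the container would observe `SECRET_DATA=value-1`, which is what the env-test container in the log echoes for verification.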
Apr 29 19:12:00.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:12:00.601: INFO: namespace secrets-8627 deletion completed in 6.141821544s • [SLOW TEST:10.374 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:12:00.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 29 19:12:00.639: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-a02f1c60-c08e-451d-94eb-8c7376f1114b" in namespace "projected-5273" to be "success or failure" Apr 29 19:12:00.681: INFO: Pod "downwardapi-volume-a02f1c60-c08e-451d-94eb-8c7376f1114b": Phase="Pending", Reason="", readiness=false. Elapsed: 41.437107ms Apr 29 19:12:02.687: INFO: Pod "downwardapi-volume-a02f1c60-c08e-451d-94eb-8c7376f1114b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047614185s Apr 29 19:12:04.691: INFO: Pod "downwardapi-volume-a02f1c60-c08e-451d-94eb-8c7376f1114b": Phase="Running", Reason="", readiness=true. Elapsed: 4.051506359s Apr 29 19:12:06.695: INFO: Pod "downwardapi-volume-a02f1c60-c08e-451d-94eb-8c7376f1114b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056062631s STEP: Saw pod success Apr 29 19:12:06.695: INFO: Pod "downwardapi-volume-a02f1c60-c08e-451d-94eb-8c7376f1114b" satisfied condition "success or failure" Apr 29 19:12:06.699: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a02f1c60-c08e-451d-94eb-8c7376f1114b container client-container: STEP: delete the pod Apr 29 19:12:06.732: INFO: Waiting for pod downwardapi-volume-a02f1c60-c08e-451d-94eb-8c7376f1114b to disappear Apr 29 19:12:06.758: INFO: Pod downwardapi-volume-a02f1c60-c08e-451d-94eb-8c7376f1114b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:12:06.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5273" for this suite. 
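The downward-API test above projects `metadata.name` into a volume file and checks the file's contents ("should provide podname only"). A sketch of how volume items with a `fieldRef` resolve against pod metadata; `render_downward_api` is a hypothetical helper, and only simple two-part field paths are handled here:

```python
def render_downward_api(pod_meta, items):
    """Resolve downward-API volume items (fieldRef paths such as
    "metadata.name") into path -> file-content pairs (illustrative sketch;
    real fieldPath resolution also supports labels, annotations, etc.)."""
    rendered = {}
    for item in items:
        field_path = item["fieldRef"]["fieldPath"]   # e.g. "metadata.name"
        section, key = field_path.split(".", 1)
        rendered[item["path"]] = str(pod_meta[section][key])
    return rendered
```

For a pod named `downwardapi-volume-...` with an item `{"path": "podname", "fieldRef": {"fieldPath": "metadata.name"}}`, the container would find its own name in `/etc/podinfo/podname` (mount path assumed for illustration).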
Apr 29 19:12:12.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:12:12.895: INFO: namespace projected-5273 deletion completed in 6.133333746s • [SLOW TEST:12.294 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:12:12.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-6272, will wait for the garbage collector to delete the pods Apr 29 19:12:17.073: INFO: Deleting Job.batch foo took: 5.237373ms Apr 29 19:12:17.373: INFO: Terminating Job.batch foo pods took: 300.229249ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:12:59.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6272" for this suite. Apr 29 19:13:05.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:13:05.377: INFO: namespace job-6272 deletion completed in 6.096565052s • [SLOW TEST:52.482 seconds] [sig-apps] Job /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:13:05.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 29 19:13:10.000: INFO: Successfully updated pod "labelsupdateeed06031-9d95-467c-8fef-13c1d7d7ca50" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:13:14.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5719" for this suite. Apr 29 19:13:36.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:13:36.180: INFO: namespace projected-5719 deletion completed in 22.123324787s • [SLOW TEST:30.803 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:13:36.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Apr 29 19:13:36.228: INFO: namespace kubectl-8203 Apr 29 19:13:36.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8203' Apr 29 19:13:36.524: INFO: stderr: "" Apr 29 19:13:36.524: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Apr 29 19:13:37.605: INFO: Selector matched 1 pods for map[app:redis] Apr 29 19:13:37.605: INFO: Found 0 / 1 Apr 29 19:13:38.528: INFO: Selector matched 1 pods for map[app:redis] Apr 29 19:13:38.528: INFO: Found 0 / 1 Apr 29 19:13:39.529: INFO: Selector matched 1 pods for map[app:redis] Apr 29 19:13:39.529: INFO: Found 0 / 1 Apr 29 19:13:40.529: INFO: Selector matched 1 pods for map[app:redis] Apr 29 19:13:40.529: INFO: Found 1 / 1 Apr 29 19:13:40.529: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 29 19:13:40.533: INFO: Selector matched 1 pods for map[app:redis] Apr 29 19:13:40.533: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 29 19:13:40.533: INFO: wait on redis-master startup in kubectl-8203 Apr 29 19:13:40.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mtfhf redis-master --namespace=kubectl-8203' Apr 29 19:13:40.648: INFO: stderr: "" Apr 29 19:13:40.648: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 29 Apr 19:13:39.414 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 29 Apr 19:13:39.414 # Server started, Redis version 3.2.12\n1:M 29 Apr 19:13:39.414 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 29 Apr 19:13:39.414 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Apr 29 19:13:40.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8203' Apr 29 19:13:40.783: INFO: stderr: "" Apr 29 19:13:40.783: INFO: stdout: "service/rm2 exposed\n" Apr 29 19:13:40.839: INFO: Service rm2 in namespace kubectl-8203 found. STEP: exposing service Apr 29 19:13:42.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8203' Apr 29 19:13:42.994: INFO: stderr: "" Apr 29 19:13:42.994: INFO: stdout: "service/rm3 exposed\n" Apr 29 19:13:43.000: INFO: Service rm3 in namespace kubectl-8203 found. 
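The expose sequence above creates service rm2 (port 1234 -> 6379) from the RC, then service rm3 (port 2345 -> 6379) from rm2. A rough sketch of what `kubectl expose` derives in both cases: a Service reusing the source object's selector with the requested port mapping. The `expose` function name and dict shapes are illustrative; the real command also handles labels, protocols, and per-kind selector lookup:

```python
def expose(source, name, port, target_port):
    """Build a Service dict from an exposable source object (an RC's
    spec.selector, or another Service's selector, as when rm3 is created
    from rm2). Illustrative sketch of kubectl expose, not its actual code."""
    selector = source.get("spec", {}).get("selector", {})
    return {
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "selector": selector,            # inherited from the source object
            "ports": [{"port": port, "targetPort": target_port}],
        },
    }
```

Chaining the two calls shows why rm3 still routes to the Redis pods: the `app: redis` selector is carried through both services even though each exposes a different port.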
[AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:13:45.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8203" for this suite. Apr 29 19:14:07.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:14:07.124: INFO: namespace kubectl-8203 deletion completed in 22.111761063s • [SLOW TEST:30.943 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:14:07.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1154.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1154.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 29 19:14:13.362: INFO: DNS probes using dns-1154/dns-test-a90bae46-1ae9-4f05-ae81-14a4a3f5629f succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:14:13.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1154" for this suite. Apr 29 19:14:19.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:14:19.557: INFO: namespace dns-1154 deletion completed in 6.12049883s • [SLOW TEST:12.433 seconds] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:14:19.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-4901/secret-test-618490cb-d5f8-4fcf-a3fb-68324a27f923 STEP: Creating a pod to test consume secrets Apr 29 19:14:19.650: INFO: Waiting up to 5m0s for pod "pod-configmaps-7155014e-a23c-468e-987e-0cde3ee63922" in namespace "secrets-4901" to be "success or failure" Apr 29 19:14:19.685: INFO: Pod "pod-configmaps-7155014e-a23c-468e-987e-0cde3ee63922": Phase="Pending", Reason="", readiness=false. Elapsed: 34.580431ms Apr 29 19:14:21.719: INFO: Pod "pod-configmaps-7155014e-a23c-468e-987e-0cde3ee63922": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069147105s Apr 29 19:14:23.792: INFO: Pod "pod-configmaps-7155014e-a23c-468e-987e-0cde3ee63922": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.141593438s STEP: Saw pod success Apr 29 19:14:23.792: INFO: Pod "pod-configmaps-7155014e-a23c-468e-987e-0cde3ee63922" satisfied condition "success or failure" Apr 29 19:14:23.795: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-7155014e-a23c-468e-987e-0cde3ee63922 container env-test: STEP: delete the pod Apr 29 19:14:23.839: INFO: Waiting for pod pod-configmaps-7155014e-a23c-468e-987e-0cde3ee63922 to disappear Apr 29 19:14:23.851: INFO: Pod pod-configmaps-7155014e-a23c-468e-987e-0cde3ee63922 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:14:23.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4901" for this suite. Apr 29 19:14:29.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:14:29.970: INFO: namespace secrets-4901 deletion completed in 6.115194374s • [SLOW TEST:10.412 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:14:29.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 29 19:14:30.002: INFO: Waiting up to 5m0s for pod "pod-7fc63e2a-80b8-4ab1-850f-ef65af53f3be" in namespace "emptydir-6041" to be "success or failure" Apr 29 19:14:30.055: INFO: Pod "pod-7fc63e2a-80b8-4ab1-850f-ef65af53f3be": Phase="Pending", Reason="", readiness=false. Elapsed: 52.699695ms Apr 29 19:14:32.059: INFO: Pod "pod-7fc63e2a-80b8-4ab1-850f-ef65af53f3be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056836451s Apr 29 19:14:34.064: INFO: Pod "pod-7fc63e2a-80b8-4ab1-850f-ef65af53f3be": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.061449213s STEP: Saw pod success Apr 29 19:14:34.064: INFO: Pod "pod-7fc63e2a-80b8-4ab1-850f-ef65af53f3be" satisfied condition "success or failure" Apr 29 19:14:34.067: INFO: Trying to get logs from node iruya-worker pod pod-7fc63e2a-80b8-4ab1-850f-ef65af53f3be container test-container: STEP: delete the pod Apr 29 19:14:34.096: INFO: Waiting for pod pod-7fc63e2a-80b8-4ab1-850f-ef65af53f3be to disappear Apr 29 19:14:34.105: INFO: Pod pod-7fc63e2a-80b8-4ab1-850f-ef65af53f3be no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:14:34.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6041" for this suite. Apr 29 19:14:40.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:14:40.231: INFO: namespace emptydir-6041 deletion completed in 6.122238995s • [SLOW TEST:10.261 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:14:40.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 29 19:14:40.381: INFO: Waiting up to 5m0s for pod "pod-3e1c0f6a-31b3-4857-b49b-351cf71a6d6c" in namespace "emptydir-165" to be "success or failure" Apr 29 19:14:40.387: INFO: Pod "pod-3e1c0f6a-31b3-4857-b49b-351cf71a6d6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.414731ms Apr 29 19:14:42.392: INFO: Pod "pod-3e1c0f6a-31b3-4857-b49b-351cf71a6d6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011092921s Apr 29 19:14:44.396: INFO: Pod "pod-3e1c0f6a-31b3-4857-b49b-351cf71a6d6c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014925837s STEP: Saw pod success Apr 29 19:14:44.396: INFO: Pod "pod-3e1c0f6a-31b3-4857-b49b-351cf71a6d6c" satisfied condition "success or failure" Apr 29 19:14:44.398: INFO: Trying to get logs from node iruya-worker2 pod pod-3e1c0f6a-31b3-4857-b49b-351cf71a6d6c container test-container: STEP: delete the pod Apr 29 19:14:44.495: INFO: Waiting for pod pod-3e1c0f6a-31b3-4857-b49b-351cf71a6d6c to disappear Apr 29 19:14:44.525: INFO: Pod pod-3e1c0f6a-31b3-4857-b49b-351cf71a6d6c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:14:44.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-165" for this suite. Apr 29 19:14:50.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:14:50.631: INFO: namespace emptydir-165 deletion completed in 6.101930636s • [SLOW TEST:10.399 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 
STEP: Creating a kubernetes client
Apr 29 19:14:50.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Apr 29 19:14:50.790: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:14:50.818: INFO: Number of nodes with available pods: 0
Apr 29 19:14:50.818: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 19:14:51.913: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:14:51.916: INFO: Number of nodes with available pods: 0
Apr 29 19:14:51.917: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 19:14:52.954: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:14:52.966: INFO: Number of nodes with available pods: 0
Apr 29 19:14:52.966: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 19:14:53.823: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:14:53.826: INFO: Number of nodes with available pods: 0
Apr 29 19:14:53.826: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 19:14:54.823: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:14:54.826: INFO: Number of nodes with available pods: 0
Apr 29 19:14:54.826: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 19:14:55.824: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:14:55.827: INFO: Number of nodes with available pods: 2
Apr 29 19:14:55.828: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Apr 29 19:14:55.847: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:14:55.849: INFO: Number of nodes with available pods: 1
Apr 29 19:14:55.849: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 29 19:14:56.854: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:14:56.858: INFO: Number of nodes with available pods: 1
Apr 29 19:14:56.858: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 29 19:14:57.854: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:14:57.857: INFO: Number of nodes with available pods: 1
Apr 29 19:14:57.857: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 29 19:14:58.854: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:14:58.858: INFO: Number of nodes with available pods: 1
Apr 29 19:14:58.858: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 29 19:14:59.853: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:14:59.857: INFO: Number of nodes with available pods: 1
Apr 29 19:14:59.857: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 29 19:15:00.853: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:00.857: INFO: Number of nodes with available pods: 1
Apr 29 19:15:00.857: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 29 19:15:02.028: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:02.158: INFO: Number of nodes with available pods: 1
Apr 29 19:15:02.158: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 29 19:15:02.854: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:02.876: INFO: Number of nodes with available pods: 1
Apr 29 19:15:02.876: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 29 19:15:03.859: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:03.863: INFO: Number of nodes with available pods: 2
Apr 29 19:15:03.863: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2182, will wait for the garbage collector to delete the pods
Apr 29 19:15:03.925: INFO: Deleting DaemonSet.extensions daemon-set took: 6.848768ms
Apr 29 19:15:04.226: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.29722ms
Apr 29 19:15:09.228: INFO: Number of nodes with available pods: 0
Apr 29 19:15:09.228: INFO: Number of running nodes: 0, number of available pods: 0
Apr 29 19:15:09.233: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2182/daemonsets","resourceVersion":"2880783"},"items":null}
Apr 29 19:15:09.236: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2182/pods","resourceVersion":"2880783"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:15:09.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2182" for this suite.
Apr 29 19:15:15.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:15:15.344: INFO: namespace daemonsets-2182 deletion completed in 6.095733205s
• [SLOW TEST:24.712 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop simple daemon [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:15:15.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 29 19:15:15.463: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2d118c3a-ec74-4eaa-af3f-d8f31587d4c1" in namespace "downward-api-8903" to be "success or failure"
Apr 29 19:15:15.472: INFO: Pod "downwardapi-volume-2d118c3a-ec74-4eaa-af3f-d8f31587d4c1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.546128ms
Apr 29 19:15:17.476: INFO: Pod "downwardapi-volume-2d118c3a-ec74-4eaa-af3f-d8f31587d4c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013712405s
Apr 29 19:15:19.480: INFO: Pod "downwardapi-volume-2d118c3a-ec74-4eaa-af3f-d8f31587d4c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017758995s
STEP: Saw pod success
Apr 29 19:15:19.481: INFO: Pod "downwardapi-volume-2d118c3a-ec74-4eaa-af3f-d8f31587d4c1" satisfied condition "success or failure"
Apr 29 19:15:19.483: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-2d118c3a-ec74-4eaa-af3f-d8f31587d4c1 container client-container:
STEP: delete the pod
Apr 29 19:15:19.523: INFO: Waiting for pod downwardapi-volume-2d118c3a-ec74-4eaa-af3f-d8f31587d4c1 to disappear
Apr 29 19:15:19.538: INFO: Pod downwardapi-volume-2d118c3a-ec74-4eaa-af3f-d8f31587d4c1 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:15:19.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8903" for this suite.
Apr 29 19:15:25.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:15:25.645: INFO: namespace downward-api-8903 deletion completed in 6.103418774s
• [SLOW TEST:10.300 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's cpu request [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:15:25.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 29 19:15:25.783: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Apr 29 19:15:25.847: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:25.865: INFO: Number of nodes with available pods: 0
Apr 29 19:15:25.865: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 19:15:26.870: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:26.873: INFO: Number of nodes with available pods: 0
Apr 29 19:15:26.873: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 19:15:28.029: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:28.111: INFO: Number of nodes with available pods: 0
Apr 29 19:15:28.111: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 19:15:28.920: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:28.923: INFO: Number of nodes with available pods: 0
Apr 29 19:15:28.923: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 19:15:29.871: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:29.875: INFO: Number of nodes with available pods: 1
Apr 29 19:15:29.875: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 29 19:15:30.870: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:30.874: INFO: Number of nodes with available pods: 2
Apr 29 19:15:30.874: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Apr 29 19:15:30.901: INFO: Wrong image for pod: daemon-set-2zjrz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:30.901: INFO: Wrong image for pod: daemon-set-knvx9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:30.918: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:31.922: INFO: Wrong image for pod: daemon-set-2zjrz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:31.922: INFO: Wrong image for pod: daemon-set-knvx9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:31.930: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:32.923: INFO: Wrong image for pod: daemon-set-2zjrz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:32.923: INFO: Wrong image for pod: daemon-set-knvx9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:32.927: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:33.923: INFO: Wrong image for pod: daemon-set-2zjrz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:33.923: INFO: Wrong image for pod: daemon-set-knvx9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:33.923: INFO: Pod daemon-set-knvx9 is not available
Apr 29 19:15:33.927: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:34.922: INFO: Wrong image for pod: daemon-set-2zjrz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:34.922: INFO: Wrong image for pod: daemon-set-knvx9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:34.922: INFO: Pod daemon-set-knvx9 is not available
Apr 29 19:15:34.927: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:35.921: INFO: Wrong image for pod: daemon-set-2zjrz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:35.921: INFO: Wrong image for pod: daemon-set-knvx9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:35.921: INFO: Pod daemon-set-knvx9 is not available
Apr 29 19:15:35.925: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:36.923: INFO: Wrong image for pod: daemon-set-2zjrz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:36.923: INFO: Wrong image for pod: daemon-set-knvx9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:36.923: INFO: Pod daemon-set-knvx9 is not available
Apr 29 19:15:36.926: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:37.922: INFO: Wrong image for pod: daemon-set-2zjrz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:37.922: INFO: Wrong image for pod: daemon-set-knvx9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:37.922: INFO: Pod daemon-set-knvx9 is not available
Apr 29 19:15:37.926: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:38.922: INFO: Wrong image for pod: daemon-set-2zjrz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:38.922: INFO: Wrong image for pod: daemon-set-knvx9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:38.922: INFO: Pod daemon-set-knvx9 is not available
Apr 29 19:15:38.926: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:39.922: INFO: Wrong image for pod: daemon-set-2zjrz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:39.922: INFO: Pod daemon-set-nctpt is not available
Apr 29 19:15:39.926: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:40.922: INFO: Wrong image for pod: daemon-set-2zjrz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:40.922: INFO: Pod daemon-set-nctpt is not available
Apr 29 19:15:40.926: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:41.922: INFO: Wrong image for pod: daemon-set-2zjrz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:41.922: INFO: Pod daemon-set-nctpt is not available
Apr 29 19:15:41.926: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:42.921: INFO: Wrong image for pod: daemon-set-2zjrz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:42.924: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:43.922: INFO: Wrong image for pod: daemon-set-2zjrz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:43.927: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:44.922: INFO: Wrong image for pod: daemon-set-2zjrz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:44.922: INFO: Pod daemon-set-2zjrz is not available
Apr 29 19:15:44.927: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:45.922: INFO: Wrong image for pod: daemon-set-2zjrz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:45.922: INFO: Pod daemon-set-2zjrz is not available
Apr 29 19:15:45.926: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:46.922: INFO: Wrong image for pod: daemon-set-2zjrz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:46.922: INFO: Pod daemon-set-2zjrz is not available
Apr 29 19:15:46.927: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:47.922: INFO: Wrong image for pod: daemon-set-2zjrz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:47.922: INFO: Pod daemon-set-2zjrz is not available
Apr 29 19:15:47.926: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:48.922: INFO: Wrong image for pod: daemon-set-2zjrz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 29 19:15:48.922: INFO: Pod daemon-set-2zjrz is not available
Apr 29 19:15:48.925: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:49.922: INFO: Pod daemon-set-pg24l is not available
Apr 29 19:15:49.927: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Apr 29 19:15:49.932: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:49.935: INFO: Number of nodes with available pods: 1
Apr 29 19:15:49.935: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 29 19:15:50.941: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:50.944: INFO: Number of nodes with available pods: 1
Apr 29 19:15:50.944: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 29 19:15:51.940: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:51.944: INFO: Number of nodes with available pods: 1
Apr 29 19:15:51.944: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 29 19:15:52.941: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:52.958: INFO: Number of nodes with available pods: 1
Apr 29 19:15:52.958: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 29 19:15:53.941: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:15:53.945: INFO: Number of nodes with available pods: 2
Apr 29 19:15:53.945: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7051, will wait for the garbage collector to delete the pods
Apr 29 19:15:54.017: INFO: Deleting DaemonSet.extensions daemon-set took: 6.942321ms
Apr 29 19:15:54.317: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.243007ms
Apr 29 19:15:59.220: INFO: Number of nodes with available pods: 0
Apr 29 19:15:59.220: INFO: Number of running nodes: 0, number of available pods: 0
Apr 29 19:15:59.222: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7051/daemonsets","resourceVersion":"2881004"},"items":null}
Apr 29 19:15:59.224: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7051/pods","resourceVersion":"2881004"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:15:59.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7051" for this suite.
Apr 29 19:16:05.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:16:05.360: INFO: namespace daemonsets-7051 deletion completed in 6.123437708s
• [SLOW TEST:39.715 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:16:05.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 29 19:16:05.442: INFO: Pod name rollover-pod: Found 0 pods out of 1
Apr 29 19:16:10.446: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 29 19:16:10.446: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Apr 29 19:16:12.450: INFO: Creating deployment "test-rollover-deployment"
Apr 29 19:16:12.459: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Apr 29 19:16:14.466: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Apr 29 19:16:14.472: INFO: Ensure that both replica sets have 1 created replica
Apr 29 19:16:14.478: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Apr 29 19:16:14.485: INFO: Updating deployment test-rollover-deployment
Apr 29 19:16:14.485: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Apr 29 19:16:16.495: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Apr 29 19:16:16.502: INFO: Make sure deployment "test-rollover-deployment" is complete
Apr 29 19:16:16.508: INFO: all replica sets need to contain the pod-template-hash label
Apr 29 19:16:16.508: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320572, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320572, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320574, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320572, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 19:16:18.517: INFO: all replica sets need to contain the pod-template-hash label
Apr 29 19:16:18.517: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320572, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320572, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320574, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320572, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 19:16:20.516: INFO: all replica sets need to contain the pod-template-hash label
Apr 29 19:16:20.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320572, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320572, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320578, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320572, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 19:16:22.524: INFO: all replica sets need to contain the pod-template-hash label
Apr 29 19:16:22.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320572, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320572, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320578, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320572, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 19:16:24.516: INFO: all replica sets need to contain the pod-template-hash label
Apr 29 19:16:24.517: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320572, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320572, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320578, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320572, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 19:16:26.539: INFO: all replica sets need to contain the pod-template-hash label
Apr 29 19:16:26.539: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320572, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320572, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320578, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320572, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 19:16:28.529: INFO: all replica sets need to contain the pod-template-hash label
Apr 29 19:16:28.529: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320572, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320572, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0,
ext:63755320578, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755320572, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 19:16:30.516: INFO: Apr 29 19:16:30.516: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 29 19:16:30.523: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-7787,SelfLink:/apis/apps/v1/namespaces/deployment-7787/deployments/test-rollover-deployment,UID:e143c956-f3cc-4ccf-91f5-870872275412,ResourceVersion:2881172,Generation:2,CreationTimestamp:2021-04-29 19:16:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil 
/dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2021-04-29 19:16:12 +0000 UTC 2021-04-29 19:16:12 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2021-04-29 19:16:28 +0000 UTC 2021-04-29 19:16:12 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
Apr 29 19:16:30.527: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-7787,SelfLink:/apis/apps/v1/namespaces/deployment-7787/replicasets/test-rollover-deployment-854595fc44,UID:f00fda2a-8f21-40a6-b1df-88c02d9ee689,ResourceVersion:2881161,Generation:2,CreationTimestamp:2021-04-29 19:16:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e143c956-f3cc-4ccf-91f5-870872275412 0xc000b18ef7 0xc000b18ef8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Apr 29 19:16:30.527: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Apr 29 19:16:30.527: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-7787,SelfLink:/apis/apps/v1/namespaces/deployment-7787/replicasets/test-rollover-controller,UID:efc161b5-e93a-4e5c-9fe3-5a5188e6a816,ResourceVersion:2881170,Generation:2,CreationTimestamp:2021-04-29 19:16:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e143c956-f3cc-4ccf-91f5-870872275412 0xc000b18e0f 0xc000b18e20}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Apr 29 19:16:30.527: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-7787,SelfLink:/apis/apps/v1/namespaces/deployment-7787/replicasets/test-rollover-deployment-9b8b997cf,UID:993f4538-ab03-44b5-a034-1812355dd3f7,ResourceVersion:2881125,Generation:2,CreationTimestamp:2021-04-29 19:16:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e143c956-f3cc-4ccf-91f5-870872275412 0xc000b18fc0 0xc000b18fc1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Apr 29 19:16:30.531: INFO: Pod "test-rollover-deployment-854595fc44-zfjqn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-zfjqn,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-7787,SelfLink:/api/v1/namespaces/deployment-7787/pods/test-rollover-deployment-854595fc44-zfjqn,UID:41f7f82c-4b81-4d90-b3a2-065ac2be07b5,ResourceVersion:2881139,Generation:0,CreationTimestamp:2021-04-29 19:16:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 f00fda2a-8f21-40a6-b1df-88c02d9ee689 0xc000b19bb7 0xc000b19bb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-clczb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-clczb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-clczb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b19c30} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b19c50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:16:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:16:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:16:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:16:14 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.200,StartTime:2021-04-29 19:16:14 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2021-04-29 19:16:17 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://321b41fa73b25e7b67a77a8eaf83f169275bee6d0c13e6485484b889702f9525}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:16:30.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7787" for this suite.
Apr 29 19:16:38.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:16:38.646: INFO: namespace deployment-7787 deletion completed in 8.111435721s
• [SLOW TEST:33.286 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support rollover [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:16:38.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 29 19:16:38.766: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"bb85c1e4-6a61-4a51-b776-38ff5eaf7b55", Controller:(*bool)(0xc002b668e2), BlockOwnerDeletion:(*bool)(0xc002b668e3)}}
Apr 29 19:16:38.772: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"194b2bc5-af57-4325-a636-83e27a272314", Controller:(*bool)(0xc0028c7c12), BlockOwnerDeletion:(*bool)(0xc0028c7c13)}}
Apr 29 19:16:38.780: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"d1294e4e-0813-4016-8f41-c405bf34fb6c", Controller:(*bool)(0xc002b66a6a), BlockOwnerDeletion:(*bool)(0xc002b66a6b)}}
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:16:43.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6539" for this suite.
Apr 29 19:16:49.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:16:49.943: INFO: namespace gc-6539 deletion completed in 6.103098605s
• [SLOW TEST:11.296 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not be blocked by dependency circle [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:16:49.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 29 19:16:50.035: INFO: Waiting up to 5m0s for pod "pod-55e88274-9370-4cf3-89e2-f57490bb1bde" in namespace "emptydir-3627" to be "success or failure"
Apr 29 19:16:50.037: INFO: Pod "pod-55e88274-9370-4cf3-89e2-f57490bb1bde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.504066ms
Apr 29 19:16:52.041: INFO: Pod "pod-55e88274-9370-4cf3-89e2-f57490bb1bde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005914395s
Apr 29 19:16:54.044: INFO: Pod "pod-55e88274-9370-4cf3-89e2-f57490bb1bde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009477013s
STEP: Saw pod success
Apr 29 19:16:54.044: INFO: Pod "pod-55e88274-9370-4cf3-89e2-f57490bb1bde" satisfied condition "success or failure"
Apr 29 19:16:54.047: INFO: Trying to get logs from node iruya-worker pod pod-55e88274-9370-4cf3-89e2-f57490bb1bde container test-container:
STEP: delete the pod
Apr 29 19:16:54.191: INFO: Waiting for pod pod-55e88274-9370-4cf3-89e2-f57490bb1bde to disappear
Apr 29 19:16:54.215: INFO: Pod pod-55e88274-9370-4cf3-89e2-f57490bb1bde no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:16:54.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3627" for this suite.
Apr 29 19:17:00.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:17:00.311: INFO: namespace emptydir-3627 deletion completed in 6.093165942s
• [SLOW TEST:10.368 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:17:00.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Apr 29 19:17:00.369: INFO: Waiting up to 5m0s for pod "var-expansion-f946ab91-45dc-4a42-ae7e-524aa272c8b8" in namespace "var-expansion-3166" to be "success or failure"
Apr 29 19:17:00.373: INFO: Pod "var-expansion-f946ab91-45dc-4a42-ae7e-524aa272c8b8":
Phase="Pending", Reason="", readiness=false. Elapsed: 3.882334ms
Apr 29 19:17:02.376: INFO: Pod "var-expansion-f946ab91-45dc-4a42-ae7e-524aa272c8b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007116438s
Apr 29 19:17:04.381: INFO: Pod "var-expansion-f946ab91-45dc-4a42-ae7e-524aa272c8b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011683838s
STEP: Saw pod success
Apr 29 19:17:04.381: INFO: Pod "var-expansion-f946ab91-45dc-4a42-ae7e-524aa272c8b8" satisfied condition "success or failure"
Apr 29 19:17:04.383: INFO: Trying to get logs from node iruya-worker pod var-expansion-f946ab91-45dc-4a42-ae7e-524aa272c8b8 container dapi-container:
STEP: delete the pod
Apr 29 19:17:04.409: INFO: Waiting for pod var-expansion-f946ab91-45dc-4a42-ae7e-524aa272c8b8 to disappear
Apr 29 19:17:04.449: INFO: Pod var-expansion-f946ab91-45dc-4a42-ae7e-524aa272c8b8 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:17:04.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3166" for this suite.
Apr 29 19:17:10.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:17:10.583: INFO: namespace var-expansion-3166 deletion completed in 6.129791421s
• [SLOW TEST:10.271 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow substituting values in a container's command [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:17:10.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Apr 29 19:17:11.188: INFO: Pod name wrapped-volume-race-0ee207c3-7291-471c-bfbb-7b2a2cb497fe: Found 0 pods out of 5
Apr 29 19:17:16.196: INFO: Pod name wrapped-volume-race-0ee207c3-7291-471c-bfbb-7b2a2cb497fe: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-0ee207c3-7291-471c-bfbb-7b2a2cb497fe in namespace emptydir-wrapper-8205, will wait for the garbage collector to delete the pods
Apr 29 19:17:32.277: INFO: Deleting ReplicationController wrapped-volume-race-0ee207c3-7291-471c-bfbb-7b2a2cb497fe took: 6.73965ms
Apr 29 19:17:32.677: INFO: Terminating ReplicationController wrapped-volume-race-0ee207c3-7291-471c-bfbb-7b2a2cb497fe pods took: 400.320278ms
STEP: Creating RC which spawns configmap-volume pods
Apr 29 19:18:19.432: INFO: Pod name wrapped-volume-race-9064cddf-ede1-436f-ac8b-c94f1b80269b: Found 0 pods out of 5
Apr 29 19:18:24.439: INFO: Pod name wrapped-volume-race-9064cddf-ede1-436f-ac8b-c94f1b80269b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-9064cddf-ede1-436f-ac8b-c94f1b80269b in namespace emptydir-wrapper-8205, will wait for the garbage collector to delete the pods
Apr 29 19:18:38.585: INFO: Deleting ReplicationController wrapped-volume-race-9064cddf-ede1-436f-ac8b-c94f1b80269b took: 7.487575ms
Apr 29 19:18:38.985: INFO: Terminating ReplicationController wrapped-volume-race-9064cddf-ede1-436f-ac8b-c94f1b80269b pods took: 400.311823ms
STEP: Creating RC which spawns configmap-volume pods
Apr 29 19:19:19.727: INFO: Pod name wrapped-volume-race-a6990bc3-acd6-4e40-aedd-0cd0427477e6: Found 0 pods out of 5
Apr 29 19:19:24.734: INFO: Pod name wrapped-volume-race-a6990bc3-acd6-4e40-aedd-0cd0427477e6: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a6990bc3-acd6-4e40-aedd-0cd0427477e6 in namespace emptydir-wrapper-8205, will wait for the garbage collector to delete the pods
Apr 29 19:19:38.875: INFO: Deleting ReplicationController wrapped-volume-race-a6990bc3-acd6-4e40-aedd-0cd0427477e6 took: 7.14221ms
Apr 29 19:19:39.275: INFO: Terminating ReplicationController wrapped-volume-race-a6990bc3-acd6-4e40-aedd-0cd0427477e6 pods took: 400.26849ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:20:20.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8205" for this suite.
Apr 29 19:20:30.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:20:31.057: INFO: namespace emptydir-wrapper-8205 deletion completed in 10.154348185s
• [SLOW TEST:200.474 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not cause race condition when used for configmaps [Serial] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:20:31.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:20:31.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4594" for this suite.
Apr 29 19:20:37.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:20:37.301: INFO: namespace kubelet-test-4594 deletion completed in 6.112148558s
• [SLOW TEST:6.244 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command that always fails in a pod
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should be possible to delete [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:20:37.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-4b1b1cf4-e582-4af7-a2ca-d67aece854e6
STEP: Creating a pod to test consume configMaps
Apr 29 19:20:37.370: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-798004e1-6602-47ad-aeef-7221f2758cc7" in namespace "projected-1523" to be "success or failure"
Apr 29 19:20:37.409: INFO: Pod "pod-projected-configmaps-798004e1-6602-47ad-aeef-7221f2758cc7": Phase="Pending", Reason="", readiness=false. Elapsed: 39.349673ms
Apr 29 19:20:39.512: INFO: Pod "pod-projected-configmaps-798004e1-6602-47ad-aeef-7221f2758cc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141894875s
Apr 29 19:20:41.516: INFO: Pod "pod-projected-configmaps-798004e1-6602-47ad-aeef-7221f2758cc7": Phase="Running", Reason="", readiness=true. Elapsed: 4.146265853s
Apr 29 19:20:43.521: INFO: Pod "pod-projected-configmaps-798004e1-6602-47ad-aeef-7221f2758cc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.150750424s
STEP: Saw pod success
Apr 29 19:20:43.521: INFO: Pod "pod-projected-configmaps-798004e1-6602-47ad-aeef-7221f2758cc7" satisfied condition "success or failure"
Apr 29 19:20:43.524: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-798004e1-6602-47ad-aeef-7221f2758cc7 container projected-configmap-volume-test:
STEP: delete the pod
Apr 29 19:20:43.557: INFO: Waiting for pod pod-projected-configmaps-798004e1-6602-47ad-aeef-7221f2758cc7 to disappear
Apr 29 19:20:43.572: INFO: Pod pod-projected-configmaps-798004e1-6602-47ad-aeef-7221f2758cc7 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:20:43.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1523" for this suite.
Apr 29 19:20:49.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:20:49.685: INFO: namespace projected-1523 deletion completed in 6.107942868s
• [SLOW TEST:12.383 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:20:49.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-bc85f1ad-3914-48ee-aab7-9399e2da7a5e
STEP: Creating a pod to test consume secrets
Apr 29 19:20:49.826: INFO: Waiting up to 5m0s for pod "pod-secrets-f2629365-276e-4fae-ad22-18344855ed89" in namespace "secrets-199" to be "success or failure"
Apr 29 19:20:49.836: INFO: Pod "pod-secrets-f2629365-276e-4fae-ad22-18344855ed89": Phase="Pending", Reason="", readiness=false. Elapsed: 9.798144ms
Apr 29 19:20:51.839: INFO: Pod "pod-secrets-f2629365-276e-4fae-ad22-18344855ed89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013004285s
Apr 29 19:20:53.844: INFO: Pod "pod-secrets-f2629365-276e-4fae-ad22-18344855ed89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017639166s
STEP: Saw pod success
Apr 29 19:20:53.844: INFO: Pod "pod-secrets-f2629365-276e-4fae-ad22-18344855ed89" satisfied condition "success or failure"
Apr 29 19:20:53.847: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-f2629365-276e-4fae-ad22-18344855ed89 container secret-volume-test:
STEP: delete the pod
Apr 29 19:20:53.872: INFO: Waiting for pod pod-secrets-f2629365-276e-4fae-ad22-18344855ed89 to disappear
Apr 29 19:20:53.877: INFO: Pod pod-secrets-f2629365-276e-4fae-ad22-18344855ed89 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:20:53.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-199" for this suite.
Apr 29 19:20:59.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:21:00.011: INFO: namespace secrets-199 deletion completed in 6.130819104s
• [SLOW TEST:10.326 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:21:00.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-9aa3523a-a98b-456e-bc42-3aa1ae284d4e
STEP: Creating a pod to test consume configMaps
Apr 29 19:21:00.119: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e89c0fdf-2958-40e6-a872-e5ae3a426ab6" in namespace "projected-3481" to be "success or failure"
Apr 29 19:21:00.135: INFO: Pod "pod-projected-configmaps-e89c0fdf-2958-40e6-a872-e5ae3a426ab6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.363811ms
Apr 29 19:21:02.139: INFO: Pod "pod-projected-configmaps-e89c0fdf-2958-40e6-a872-e5ae3a426ab6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019309548s
Apr 29 19:21:04.143: INFO: Pod "pod-projected-configmaps-e89c0fdf-2958-40e6-a872-e5ae3a426ab6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023614626s
STEP: Saw pod success
Apr 29 19:21:04.143: INFO: Pod "pod-projected-configmaps-e89c0fdf-2958-40e6-a872-e5ae3a426ab6" satisfied condition "success or failure"
Apr 29 19:21:04.146: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-e89c0fdf-2958-40e6-a872-e5ae3a426ab6 container projected-configmap-volume-test:
STEP: delete the pod
Apr 29 19:21:04.293: INFO: Waiting for pod pod-projected-configmaps-e89c0fdf-2958-40e6-a872-e5ae3a426ab6 to disappear
Apr 29 19:21:04.346: INFO: Pod pod-projected-configmaps-e89c0fdf-2958-40e6-a872-e5ae3a426ab6 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:21:04.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3481" for this suite.
Apr 29 19:21:10.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:21:10.492: INFO: namespace projected-3481 deletion completed in 6.141385564s
• [SLOW TEST:10.480 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:21:10.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-bf9c7c3b-951b-4509-9339-0671464faed6
STEP: Creating a pod to test consume configMaps
Apr 29 19:21:10.545: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9d2bf75f-afa2-4648-b98e-2d43ef43b2f2" in namespace "projected-1993" to be "success or failure"
Apr 29 19:21:10.583: INFO: Pod "pod-projected-configmaps-9d2bf75f-afa2-4648-b98e-2d43ef43b2f2": Phase="Pending", Reason="", readiness=false. Elapsed: 38.616092ms
Apr 29 19:21:12.656: INFO: Pod "pod-projected-configmaps-9d2bf75f-afa2-4648-b98e-2d43ef43b2f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111408258s
Apr 29 19:21:14.660: INFO: Pod "pod-projected-configmaps-9d2bf75f-afa2-4648-b98e-2d43ef43b2f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11572727s
STEP: Saw pod success
Apr 29 19:21:14.661: INFO: Pod "pod-projected-configmaps-9d2bf75f-afa2-4648-b98e-2d43ef43b2f2" satisfied condition "success or failure"
Apr 29 19:21:14.663: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-9d2bf75f-afa2-4648-b98e-2d43ef43b2f2 container projected-configmap-volume-test:
STEP: delete the pod
Apr 29 19:21:14.717: INFO: Waiting for pod pod-projected-configmaps-9d2bf75f-afa2-4648-b98e-2d43ef43b2f2 to disappear
Apr 29 19:21:14.729: INFO: Pod pod-projected-configmaps-9d2bf75f-afa2-4648-b98e-2d43ef43b2f2 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:21:14.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1993" for this suite.
Apr 29 19:21:20.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:21:20.827: INFO: namespace projected-1993 deletion completed in 6.095479432s
• [SLOW TEST:10.335 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:21:20.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 29 19:21:20.904: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c99962c9-0eec-4b81-9334-e7a2934c29d9" in namespace "downward-api-3384" to be "success or failure"
Apr 29 19:21:20.949: INFO: Pod "downwardapi-volume-c99962c9-0eec-4b81-9334-e7a2934c29d9": Phase="Pending", Reason="", readiness=false. Elapsed: 45.419646ms
Apr 29 19:21:22.973: INFO: Pod "downwardapi-volume-c99962c9-0eec-4b81-9334-e7a2934c29d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069625516s
Apr 29 19:21:24.977: INFO: Pod "downwardapi-volume-c99962c9-0eec-4b81-9334-e7a2934c29d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073498014s
STEP: Saw pod success
Apr 29 19:21:24.977: INFO: Pod "downwardapi-volume-c99962c9-0eec-4b81-9334-e7a2934c29d9" satisfied condition "success or failure"
Apr 29 19:21:24.981: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-c99962c9-0eec-4b81-9334-e7a2934c29d9 container client-container:
STEP: delete the pod
Apr 29 19:21:25.102: INFO: Waiting for pod downwardapi-volume-c99962c9-0eec-4b81-9334-e7a2934c29d9 to disappear
Apr 29 19:21:25.106: INFO: Pod downwardapi-volume-c99962c9-0eec-4b81-9334-e7a2934c29d9 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:21:25.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3384" for this suite.
Apr 29 19:21:31.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:21:31.249: INFO: namespace downward-api-3384 deletion completed in 6.139575205s
• [SLOW TEST:10.422 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:21:31.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-4905
I0429 19:21:31.316681 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4905, replica count: 1
I0429 19:21:32.367158 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0429 19:21:33.367485 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0429 19:21:34.367721 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0429 19:21:35.367981 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 29 19:21:35.530: INFO: Created: latency-svc-4wvxj
Apr 29 19:21:35.538: INFO: Got endpoints: latency-svc-4wvxj [69.860248ms]
Apr 29 19:21:35.581: INFO: Created: latency-svc-zd29v
Apr 29 19:21:35.591: INFO: Got endpoints: latency-svc-zd29v [53.658956ms]
Apr 29 19:21:35.605: INFO: Created: latency-svc-2pg5v
Apr 29 19:21:35.616: INFO: Got endpoints: latency-svc-2pg5v [77.688609ms]
Apr 29 19:21:35.629: INFO: Created: latency-svc-jkc62
Apr 29 19:21:35.662: INFO: Got endpoints: latency-svc-jkc62 [123.797665ms]
Apr 29 19:21:35.671: INFO: Created: latency-svc-h6zvj
Apr 29 19:21:35.682: INFO: Got endpoints: latency-svc-h6zvj [143.838566ms]
Apr 29 19:21:35.702: INFO: Created: latency-svc-lmzv7
Apr 29 19:21:35.718: INFO: Got endpoints: latency-svc-lmzv7 [179.699606ms]
Apr 29 19:21:35.737: INFO: Created: latency-svc-5qpcv
Apr 29 19:21:35.760: INFO: Got endpoints: latency-svc-5qpcv [221.67595ms]
Apr 29 19:21:35.806: INFO: Created: latency-svc-7qkqh
Apr 29 19:21:35.813: INFO: Got endpoints: latency-svc-7qkqh [275.352548ms]
Apr 29 19:21:35.863: INFO: Created: latency-svc-gtl7m
Apr 29 19:21:35.893: INFO: Got endpoints: latency-svc-gtl7m [354.96596ms]
Apr 29 19:21:35.935: INFO: Created: latency-svc-g6nq2
Apr 29 19:21:35.960: INFO: Got endpoints: latency-svc-g6nq2 [421.512286ms]
Apr 29 19:21:35.961: INFO: Created: latency-svc-gtmgq
Apr 29 19:21:35.974: INFO: Got endpoints: latency-svc-gtmgq [436.443587ms]
Apr 29 19:21:36.008: INFO: Created: latency-svc-cmv4t
Apr 29 19:21:36.029: INFO: Got endpoints: latency-svc-cmv4t [491.12559ms]
Apr 29 19:21:36.067: INFO: Created: latency-svc-d929g
Apr 29 19:21:36.082: INFO: Got endpoints: latency-svc-d929g [544.401576ms]
Apr 29 19:21:36.115: INFO: Created: latency-svc-2xjdt
Apr 29 19:21:36.144: INFO: Got endpoints: latency-svc-2xjdt [606.336241ms]
Apr 29 19:21:36.195: INFO: Created: latency-svc-jbs5s
Apr 29 19:21:36.217: INFO: Got endpoints: latency-svc-jbs5s [679.467897ms]
Apr 29 19:21:36.217: INFO: Created: latency-svc-wlnsx
Apr 29 19:21:36.232: INFO: Got endpoints: latency-svc-wlnsx [694.430334ms]
Apr 29 19:21:36.254: INFO: Created: latency-svc-ts9fs
Apr 29 19:21:36.272: INFO: Got endpoints: latency-svc-ts9fs [680.385384ms]
Apr 29 19:21:36.351: INFO: Created: latency-svc-shf5w
Apr 29 19:21:36.373: INFO: Got endpoints: latency-svc-shf5w [757.227234ms]
Apr 29 19:21:36.374: INFO: Created: latency-svc-ghvdv
Apr 29 19:21:36.402: INFO: Got endpoints: latency-svc-ghvdv [740.631205ms]
Apr 29 19:21:36.421: INFO: Created: latency-svc-gwrmw
Apr 29 19:21:36.431: INFO: Got endpoints: latency-svc-gwrmw [748.768706ms]
Apr 29 19:21:36.483: INFO: Created: latency-svc-6kzpq
Apr 29 19:21:36.505: INFO: Got endpoints: latency-svc-6kzpq [787.746347ms]
Apr 29 19:21:36.505: INFO: Created: latency-svc-ncgx4
Apr 29 19:21:36.520: INFO: Got endpoints: latency-svc-ncgx4 [760.460823ms]
Apr 29 19:21:36.542: INFO: Created: latency-svc-gpvfh
Apr 29 19:21:36.557: INFO: Got endpoints: latency-svc-gpvfh [743.367395ms]
Apr 29 19:21:36.578: INFO: Created: latency-svc-g244h
Apr 29 19:21:36.632: INFO: Got endpoints: latency-svc-g244h [739.164216ms]
Apr 29 19:21:36.649: INFO: Created: latency-svc-xbd2m
Apr 29 19:21:36.665: INFO: Got endpoints: latency-svc-xbd2m [705.035095ms]
Apr 29 19:21:36.685: INFO: Created: latency-svc-2tj9k
Apr 29 19:21:36.700: INFO: Got endpoints: latency-svc-2tj9k [725.013139ms]
Apr 29 19:21:36.721: INFO: Created: latency-svc-mnlgd
Apr 29 19:21:36.751: INFO: Got endpoints: latency-svc-mnlgd [721.985382ms]
Apr 29 19:21:36.763: INFO: Created: latency-svc-rw4w8
Apr 29 19:21:36.793: INFO: Got endpoints: latency-svc-rw4w8 [710.437373ms]
Apr 29 19:21:36.829: INFO: Created: latency-svc-gv9n9
Apr 29 19:21:36.844: INFO: Got endpoints: latency-svc-gv9n9 [699.330347ms]
Apr 29 19:21:36.878: INFO: Created: latency-svc-8wxhg
Apr 29 19:21:36.901: INFO: Got endpoints: latency-svc-8wxhg [683.663511ms]
Apr 29 19:21:36.902: INFO: Created: latency-svc-694f2
Apr 29 19:21:36.915: INFO: Got endpoints: latency-svc-694f2 [682.901977ms]
Apr 29 19:21:36.937: INFO: Created: latency-svc-m4ztx
Apr 29 19:21:36.945: INFO: Got endpoints: latency-svc-m4ztx [673.463096ms]
Apr 29 19:21:36.967: INFO: Created: latency-svc-qffhd
Apr 29 19:21:37.015: INFO: Got endpoints: latency-svc-qffhd [642.14941ms]
Apr 29 19:21:37.039: INFO: Created: latency-svc-tjwtj
Apr 29 19:21:37.053: INFO: Got endpoints: latency-svc-tjwtj [650.934858ms]
Apr 29 19:21:37.075: INFO: Created: latency-svc-p64zs
Apr 29 19:21:37.090: INFO: Got endpoints: latency-svc-p64zs [659.528129ms]
Apr 29 19:21:37.105: INFO: Created: latency-svc-px76n
Apr 29 19:21:37.146: INFO: Got endpoints: latency-svc-px76n [640.916735ms]
Apr 29 19:21:37.159: INFO: Created: latency-svc-tlps9
Apr 29 19:21:37.174: INFO: Got endpoints: latency-svc-tlps9 [653.087851ms]
Apr 29 19:21:37.195: INFO: Created: latency-svc-94gn9
Apr 29 19:21:37.209: INFO: Got endpoints: latency-svc-94gn9 [652.412224ms]
Apr 29 19:21:37.231: INFO: Created: latency-svc-p4wwr
Apr 29 19:21:37.245: INFO: Got endpoints: latency-svc-p4wwr [612.553848ms]
Apr 29 19:21:37.291: INFO: Created: latency-svc-cfnjx
Apr 29 19:21:37.302: INFO: Got endpoints: latency-svc-cfnjx [637.539142ms]
Apr 29 19:21:37.339: INFO: Created: latency-svc-xk297
Apr 29 19:21:37.359: INFO: Got endpoints: latency-svc-xk297 [659.324044ms]
Apr 29 19:21:37.476: INFO: Created: latency-svc-zp5jx
Apr 29 19:21:37.543: INFO: Got endpoints: latency-svc-zp5jx [791.559035ms]
Apr 29 19:21:37.544: INFO: Created: latency-svc-lmb2b
Apr 29 19:21:37.569: INFO: Got endpoints: latency-svc-lmb2b [775.859968ms]
Apr 29 19:21:37.614: INFO: Created: latency-svc-8jk5b
Apr 29 19:21:37.657: INFO: Got endpoints: latency-svc-8jk5b [813.188566ms]
Apr 29 19:21:37.681: INFO: Created: latency-svc-vdnhg
Apr 29 19:21:37.700: INFO: Got endpoints: latency-svc-vdnhg [799.167297ms]
Apr 29 19:21:37.757: INFO: Created: latency-svc-mtmsl
Apr 29 19:21:37.760: INFO: Got endpoints: latency-svc-mtmsl [844.753906ms]
Apr 29 19:21:37.782: INFO: Created: latency-svc-8dbdf
Apr 29 19:21:37.796: INFO: Got endpoints: latency-svc-8dbdf [850.81514ms]
Apr 29 19:21:37.813: INFO: Created: latency-svc-cldq7
Apr 29 19:21:37.820: INFO: Got endpoints: latency-svc-cldq7 [804.92009ms]
Apr 29 19:21:37.848: INFO: Created: latency-svc-q2c7r
Apr 29 19:21:37.889: INFO: Got endpoints: latency-svc-q2c7r [835.749902ms]
Apr 29 19:21:37.903: INFO: Created: latency-svc-5v59d
Apr 29 19:21:37.917: INFO: Got endpoints: latency-svc-5v59d [826.346294ms]
Apr 29 19:21:37.938: INFO: Created: latency-svc-v9tz8
Apr 29 19:21:37.970: INFO: Got endpoints: latency-svc-v9tz8 [823.668108ms]
Apr 29 19:21:38.027: INFO: Created: latency-svc-69qvv
Apr 29 19:21:38.058: INFO: Got endpoints: latency-svc-69qvv [884.403244ms]
Apr 29 19:21:38.058: INFO: Created: latency-svc-xgcwv
Apr 29 19:21:38.071: INFO: Got endpoints: latency-svc-xgcwv [861.823257ms]
Apr 29 19:21:38.107: INFO: Created: latency-svc-cmkkl
Apr 29 19:21:38.119: INFO: Got endpoints: latency-svc-cmkkl [874.287089ms]
Apr 29 19:21:38.159: INFO: Created: latency-svc-srmtd
Apr 29 19:21:38.173: INFO: Got endpoints: latency-svc-srmtd [870.456676ms]
Apr 29 19:21:38.174: INFO: Created: latency-svc-rxdwf
Apr 29 19:21:38.185: INFO: Got endpoints: latency-svc-rxdwf [826.258174ms]
Apr 29 19:21:38.203: INFO: Created: latency-svc-h898k
Apr 29 19:21:38.216: INFO: Got endpoints: latency-svc-h898k [672.841528ms]
Apr 29 19:21:38.232: INFO: Created: latency-svc-9sh7p
Apr 29 19:21:38.245: INFO: Got endpoints: latency-svc-9sh7p [676.426276ms]
Apr 29 19:21:38.321: INFO: Created: latency-svc-bmbnh
Apr 29 19:21:38.346: INFO: Got endpoints: latency-svc-bmbnh [688.966064ms]
Apr 29 19:21:38.347: INFO: Created: latency-svc-df89r
Apr 29 19:21:38.360: INFO: Got endpoints: latency-svc-df89r [659.926898ms]
Apr 29 19:21:38.376: INFO: Created: latency-svc-5ddqm
Apr 29 19:21:38.401: INFO: Got endpoints: latency-svc-5ddqm [640.337882ms]
Apr 29 19:21:38.458: INFO: Created: latency-svc-hwgxd
Apr 29 19:21:38.478: INFO: Created: latency-svc-p9q6s
Apr 29 19:21:38.478: INFO: Got endpoints: latency-svc-hwgxd [681.675876ms]
Apr 29 19:21:38.491: INFO: Got endpoints: latency-svc-p9q6s [671.145688ms]
Apr 29 19:21:38.508: INFO: Created: latency-svc-gv4jx
Apr 29 19:21:38.521: INFO: Got endpoints: latency-svc-gv4jx [631.983466ms]
Apr 29 19:21:38.538: INFO: Created: latency-svc-jwks5
Apr 29 19:21:38.551: INFO: Got endpoints: latency-svc-jwks5 [634.876129ms]
Apr 29 19:21:38.590: INFO: Created: latency-svc-kqbxl
Apr 29 19:21:38.622: INFO: Got endpoints: latency-svc-kqbxl [652.146913ms]
Apr 29 19:21:38.625: INFO: Created: latency-svc-47njm
Apr 29 19:21:38.641: INFO: Got endpoints: latency-svc-47njm [582.656733ms]
Apr 29 19:21:38.659: INFO: Created: latency-svc-s4q65
Apr 29 19:21:38.670: INFO: Got endpoints: latency-svc-s4q65 [599.212052ms]
Apr 29 19:21:38.728: INFO: Created: latency-svc-c2tff
Apr 29 19:21:38.760: INFO: Created: latency-svc-hvzxg
Apr 29 19:21:38.760: INFO: Got endpoints: latency-svc-c2tff [641.055895ms]
Apr 29 19:21:38.773: INFO: Got endpoints: latency-svc-hvzxg [600.549748ms]
Apr 29 19:21:38.796: INFO: Created: latency-svc-n5ltl
Apr 29 19:21:38.826: INFO: Got endpoints: latency-svc-n5ltl [640.651666ms]
Apr 29 19:21:38.866: INFO: Created: latency-svc-c75d2
Apr 29 19:21:38.891: INFO: Got endpoints: latency-svc-c75d2 [675.444463ms]
Apr 29 19:21:38.892: INFO: Created: latency-svc-4f9x7
Apr 29 19:21:38.904: INFO: Got endpoints: latency-svc-4f9x7 [658.971635ms]
Apr 29 19:21:38.928: INFO: Created: latency-svc-5zbhk
Apr 29 19:21:38.941: INFO: Got endpoints: latency-svc-5zbhk [594.596437ms]
Apr 29 19:21:38.958: INFO: Created: latency-svc-k68sm
Apr 29 19:21:38.997: INFO: Got endpoints: latency-svc-k68sm [636.702303ms]
Apr 29 19:21:39.000: INFO: Created: latency-svc-4n9qj
Apr 29 19:21:39.013: INFO: Got endpoints: latency-svc-4n9qj [611.861605ms]
Apr 29 19:21:39.037: INFO: Created: latency-svc-b9plk
Apr 29 19:21:39.049: INFO: Got endpoints: latency-svc-b9plk [571.350831ms]
Apr 29 19:21:39.066: INFO: Created: latency-svc-4z442
Apr 29 19:21:39.078: INFO: Got endpoints: latency-svc-4z442 [586.89352ms]
Apr 29 19:21:39.097: INFO: Created: latency-svc-7cdcq
Apr 29 19:21:39.135: INFO: Got endpoints: latency-svc-7cdcq [613.324494ms]
Apr 29 19:21:39.162: INFO: Created: latency-svc-7jsx6
Apr 29 19:21:39.187: INFO: Got endpoints: latency-svc-7jsx6 [635.402109ms]
Apr 29 19:21:39.292: INFO: Created: latency-svc-zt96l
Apr 29 19:21:39.314: INFO: Created: latency-svc-pxnnv
Apr 29 19:21:39.314: INFO: Got endpoints: latency-svc-zt96l [691.323488ms]
Apr 29 19:21:39.336: INFO: Got endpoints: latency-svc-pxnnv [695.096373ms]
Apr 29 19:21:39.354: INFO: Created: latency-svc-5qvrh
Apr 29 19:21:39.366: INFO: Got endpoints: latency-svc-5qvrh [695.374184ms]
Apr 29 19:21:39.385: INFO: Created: latency-svc-nd6w8
Apr 29 19:21:39.416: INFO: Got endpoints: latency-svc-nd6w8 [655.719637ms]
Apr 29 19:21:39.432: INFO: Created: latency-svc-nlrld
Apr 29 19:21:39.444: INFO: Got endpoints: latency-svc-nlrld [670.498443ms]
Apr 29 19:21:39.480: INFO: Created: latency-svc-5p47x
Apr 29 19:21:39.498: INFO: Got endpoints: latency-svc-5p47x [671.740639ms]
Apr 29 19:21:39.585: INFO: Created: latency-svc-fxbc6
Apr 29 19:21:39.630: INFO: Got endpoints: latency-svc-fxbc6 [739.050482ms]
Apr 29 19:21:39.632: INFO: Created: latency-svc-dlgwb
Apr 29 19:21:39.641: INFO: Got endpoints: latency-svc-dlgwb [737.015629ms]
Apr 29 19:21:39.661: INFO: Created: latency-svc-ll659
Apr 29 19:21:39.678: INFO: Got endpoints: latency-svc-ll659 [736.909286ms]
Apr 29 19:21:39.722: INFO: Created: latency-svc-xt2dt
Apr 29 19:21:39.726: INFO: Got endpoints: latency-svc-xt2dt [728.786715ms]
Apr 29 19:21:39.751: INFO: Created: latency-svc-zh6gf
Apr 29 19:21:39.761: INFO: Got endpoints: latency-svc-zh6gf [748.76478ms]
Apr 29 19:21:39.780: INFO: Created: latency-svc-k6wfb
Apr 29 19:21:39.791: INFO: Got endpoints: latency-svc-k6wfb [742.014639ms]
Apr 29 19:21:39.859: INFO: Created: latency-svc-2vrj9
Apr 29 19:21:39.883: INFO: Created: latency-svc-wttkm
Apr 29 19:21:39.883: INFO: Got endpoints: latency-svc-2vrj9 [804.597523ms]
Apr 29 19:21:39.918: INFO: Got endpoints: latency-svc-wttkm [783.031123ms]
Apr 29 19:21:40.003: INFO: Created: latency-svc-l8nlr
Apr 29 19:21:40.005: INFO: Created: latency-svc-gtctk
Apr 29 19:21:40.025: INFO: Got endpoints: latency-svc-gtctk [710.930698ms]
Apr 29 19:21:40.025: INFO: Got endpoints: latency-svc-l8nlr [837.795199ms]
Apr 29 19:21:40.069: INFO: Created: latency-svc-qnsc2
Apr 29 19:21:40.085: INFO: Got endpoints: latency-svc-qnsc2 [749.122582ms]
Apr 29 19:21:40.141: INFO: Created: latency-svc-xq7c4
Apr 29 19:21:40.164: INFO: Got endpoints: latency-svc-xq7c4 [797.911912ms]
Apr 29 19:21:40.165: INFO: Created: latency-svc-dw642
Apr 29 19:21:40.175: INFO: Got endpoints: latency-svc-dw642 [758.535504ms]
Apr 29 19:21:40.194: INFO: Created: latency-svc-hkz85
Apr 29 19:21:40.211: INFO: Got endpoints: latency-svc-hkz85 [766.599273ms]
Apr 29 19:21:40.230: INFO: Created: latency-svc-dp8v8
Apr 29 19:21:40.273: INFO: Got endpoints: latency-svc-dp8v8 [774.847521ms]
Apr 29 19:21:40.274: INFO: Created: latency-svc-x9fb8
Apr 29 19:21:40.283: INFO: Got endpoints: latency-svc-x9fb8 [652.095157ms]
Apr 29 19:21:40.331: INFO: Created: latency-svc-qdvx8
Apr 29 19:21:40.349: INFO: Got endpoints: latency-svc-qdvx8 [707.282014ms]
Apr 29 19:21:40.405: INFO: Created: latency-svc-2wn7v
Apr 29 19:21:40.427: INFO: Got endpoints: latency-svc-2wn7v [749.294122ms]
Apr 29 19:21:40.429: INFO: Created: latency-svc-stblq
Apr 29 19:21:40.439: INFO: Got endpoints: latency-svc-stblq [713.205546ms]
Apr 29 19:21:40.458: INFO: Created: latency-svc-jg662
Apr 29 19:21:40.469: INFO: Got endpoints: latency-svc-jg662 [707.314006ms]
Apr 29 19:21:40.487: INFO: Created: latency-svc-wgtr5
Apr 29 19:21:40.542: INFO: Got endpoints: latency-svc-wgtr5 [750.172061ms]
Apr 29 19:21:40.554: INFO: Created: latency-svc-52j8h
Apr 29 19:21:40.570: INFO: Got endpoints: latency-svc-52j8h [687.228988ms]
Apr 29 19:21:40.584: INFO: Created: latency-svc-jfhwv
Apr 29 19:21:40.594: INFO: Got endpoints: latency-svc-jfhwv [676.514514ms]
Apr 29 19:21:40.608: INFO: Created: latency-svc-76xbg
Apr 29 19:21:40.625: INFO: Got endpoints: latency-svc-76xbg [599.852446ms]
Apr 29 19:21:40.638: INFO: Created: latency-svc-r4qz7
Apr 29 19:21:40.692: INFO: Got endpoints: latency-svc-r4qz7 [667.190945ms]
Apr 29 19:21:40.694: INFO: Created: latency-svc-jbrvd
Apr 29 19:21:40.702: INFO: Got endpoints: latency-svc-jbrvd [616.654707ms]
Apr 29 19:21:40.721:
INFO: Created: latency-svc-4ldp6 Apr 29 19:21:40.751: INFO: Got endpoints: latency-svc-4ldp6 [587.026449ms] Apr 29 19:21:40.782: INFO: Created: latency-svc-vkqz4 Apr 29 19:21:40.826: INFO: Got endpoints: latency-svc-vkqz4 [651.322187ms] Apr 29 19:21:40.836: INFO: Created: latency-svc-r29r2 Apr 29 19:21:40.870: INFO: Got endpoints: latency-svc-r29r2 [659.728861ms] Apr 29 19:21:40.889: INFO: Created: latency-svc-qbcnl Apr 29 19:21:40.906: INFO: Got endpoints: latency-svc-qbcnl [633.739513ms] Apr 29 19:21:40.974: INFO: Created: latency-svc-wlfpr Apr 29 19:21:40.997: INFO: Got endpoints: latency-svc-wlfpr [714.329278ms] Apr 29 19:21:40.998: INFO: Created: latency-svc-zgghj Apr 29 19:21:41.008: INFO: Got endpoints: latency-svc-zgghj [659.298373ms] Apr 29 19:21:41.021: INFO: Created: latency-svc-qlwjq Apr 29 19:21:41.032: INFO: Got endpoints: latency-svc-qlwjq [604.72001ms] Apr 29 19:21:41.052: INFO: Created: latency-svc-h5qfr Apr 29 19:21:41.068: INFO: Got endpoints: latency-svc-h5qfr [629.156774ms] Apr 29 19:21:41.111: INFO: Created: latency-svc-bksfk Apr 29 19:21:41.116: INFO: Got endpoints: latency-svc-bksfk [647.069379ms] Apr 29 19:21:41.142: INFO: Created: latency-svc-6pgml Apr 29 19:21:41.157: INFO: Got endpoints: latency-svc-6pgml [615.146926ms] Apr 29 19:21:41.184: INFO: Created: latency-svc-cwcvv Apr 29 19:21:41.199: INFO: Got endpoints: latency-svc-cwcvv [628.771607ms] Apr 29 19:21:41.244: INFO: Created: latency-svc-9rtfl Apr 29 19:21:41.261: INFO: Got endpoints: latency-svc-9rtfl [666.651582ms] Apr 29 19:21:41.262: INFO: Created: latency-svc-b4j2t Apr 29 19:21:41.277: INFO: Got endpoints: latency-svc-b4j2t [652.535649ms] Apr 29 19:21:41.304: INFO: Created: latency-svc-cmb2j Apr 29 19:21:41.325: INFO: Got endpoints: latency-svc-cmb2j [633.026534ms] Apr 29 19:21:41.370: INFO: Created: latency-svc-5ffv8 Apr 29 19:21:41.405: INFO: Got endpoints: latency-svc-5ffv8 [702.827895ms] Apr 29 19:21:41.407: INFO: Created: latency-svc-4fwmb Apr 29 19:21:41.415: INFO: Got 
endpoints: latency-svc-4fwmb [663.785085ms] Apr 29 19:21:41.435: INFO: Created: latency-svc-58ffw Apr 29 19:21:41.445: INFO: Got endpoints: latency-svc-58ffw [619.134239ms] Apr 29 19:21:41.525: INFO: Created: latency-svc-6fxjb Apr 29 19:21:41.556: INFO: Got endpoints: latency-svc-6fxjb [685.099451ms] Apr 29 19:21:41.556: INFO: Created: latency-svc-mdgll Apr 29 19:21:41.572: INFO: Got endpoints: latency-svc-mdgll [665.16076ms] Apr 29 19:21:41.592: INFO: Created: latency-svc-vn2kc Apr 29 19:21:41.607: INFO: Got endpoints: latency-svc-vn2kc [610.209159ms] Apr 29 19:21:41.662: INFO: Created: latency-svc-4sbz5 Apr 29 19:21:41.705: INFO: Got endpoints: latency-svc-4sbz5 [696.876074ms] Apr 29 19:21:41.706: INFO: Created: latency-svc-sw6wl Apr 29 19:21:41.721: INFO: Got endpoints: latency-svc-sw6wl [689.305219ms] Apr 29 19:21:41.794: INFO: Created: latency-svc-sgmd7 Apr 29 19:21:41.819: INFO: Created: latency-svc-v2wn9 Apr 29 19:21:41.819: INFO: Got endpoints: latency-svc-sgmd7 [750.804778ms] Apr 29 19:21:41.834: INFO: Got endpoints: latency-svc-v2wn9 [718.358985ms] Apr 29 19:21:41.849: INFO: Created: latency-svc-hvqzm Apr 29 19:21:41.858: INFO: Got endpoints: latency-svc-hvqzm [700.917398ms] Apr 29 19:21:41.874: INFO: Created: latency-svc-ctqzr Apr 29 19:21:41.931: INFO: Got endpoints: latency-svc-ctqzr [732.285805ms] Apr 29 19:21:41.951: INFO: Created: latency-svc-pvjbr Apr 29 19:21:41.966: INFO: Got endpoints: latency-svc-pvjbr [704.811448ms] Apr 29 19:21:42.000: INFO: Created: latency-svc-x7zbt Apr 29 19:21:42.014: INFO: Got endpoints: latency-svc-x7zbt [736.736345ms] Apr 29 19:21:42.030: INFO: Created: latency-svc-r2qcx Apr 29 19:21:42.057: INFO: Got endpoints: latency-svc-r2qcx [731.788788ms] Apr 29 19:21:42.084: INFO: Created: latency-svc-4psbm Apr 29 19:21:42.098: INFO: Got endpoints: latency-svc-4psbm [693.078143ms] Apr 29 19:21:42.119: INFO: Created: latency-svc-t9mtx Apr 29 19:21:42.134: INFO: Got endpoints: latency-svc-t9mtx [719.613431ms] Apr 29 19:21:42.189: 
INFO: Created: latency-svc-jzf9t Apr 29 19:21:42.194: INFO: Got endpoints: latency-svc-jzf9t [748.655035ms] Apr 29 19:21:42.209: INFO: Created: latency-svc-glxhg Apr 29 19:21:42.218: INFO: Got endpoints: latency-svc-glxhg [662.28722ms] Apr 29 19:21:42.233: INFO: Created: latency-svc-xlq5k Apr 29 19:21:42.248: INFO: Got endpoints: latency-svc-xlq5k [676.569963ms] Apr 29 19:21:42.269: INFO: Created: latency-svc-s2cgd Apr 29 19:21:42.284: INFO: Got endpoints: latency-svc-s2cgd [677.12031ms] Apr 29 19:21:42.321: INFO: Created: latency-svc-lgjcz Apr 29 19:21:42.341: INFO: Got endpoints: latency-svc-lgjcz [635.341852ms] Apr 29 19:21:42.364: INFO: Created: latency-svc-kfs8l Apr 29 19:21:42.374: INFO: Got endpoints: latency-svc-kfs8l [652.473205ms] Apr 29 19:21:42.401: INFO: Created: latency-svc-xrk7k Apr 29 19:21:42.416: INFO: Got endpoints: latency-svc-xrk7k [596.831259ms] Apr 29 19:21:42.458: INFO: Created: latency-svc-lcbx9 Apr 29 19:21:42.473: INFO: Got endpoints: latency-svc-lcbx9 [638.403749ms] Apr 29 19:21:42.497: INFO: Created: latency-svc-jkdfl Apr 29 19:21:42.511: INFO: Got endpoints: latency-svc-jkdfl [653.222098ms] Apr 29 19:21:42.527: INFO: Created: latency-svc-46rvp Apr 29 19:21:42.541: INFO: Got endpoints: latency-svc-46rvp [609.517188ms] Apr 29 19:21:42.557: INFO: Created: latency-svc-kpq8p Apr 29 19:21:42.590: INFO: Got endpoints: latency-svc-kpq8p [623.886818ms] Apr 29 19:21:42.611: INFO: Created: latency-svc-dq65n Apr 29 19:21:42.637: INFO: Got endpoints: latency-svc-dq65n [623.43001ms] Apr 29 19:21:42.659: INFO: Created: latency-svc-n8jcx Apr 29 19:21:42.673: INFO: Got endpoints: latency-svc-n8jcx [615.936992ms] Apr 29 19:21:42.719: INFO: Created: latency-svc-6k5ss Apr 29 19:21:42.739: INFO: Got endpoints: latency-svc-6k5ss [641.449448ms] Apr 29 19:21:42.768: INFO: Created: latency-svc-74qgp Apr 29 19:21:42.782: INFO: Got endpoints: latency-svc-74qgp [647.054654ms] Apr 29 19:21:42.848: INFO: Created: latency-svc-sb5zf Apr 29 19:21:42.880: INFO: Got 
endpoints: latency-svc-sb5zf [685.953485ms] Apr 29 19:21:42.881: INFO: Created: latency-svc-85sb5 Apr 29 19:21:42.896: INFO: Got endpoints: latency-svc-85sb5 [677.540363ms] Apr 29 19:21:42.911: INFO: Created: latency-svc-znpjc Apr 29 19:21:42.919: INFO: Got endpoints: latency-svc-znpjc [671.237765ms] Apr 29 19:21:42.934: INFO: Created: latency-svc-fq2dx Apr 29 19:21:42.943: INFO: Got endpoints: latency-svc-fq2dx [658.927725ms] Apr 29 19:21:42.979: INFO: Created: latency-svc-k9cpr Apr 29 19:21:42.985: INFO: Got endpoints: latency-svc-k9cpr [644.612701ms] Apr 29 19:21:43.013: INFO: Created: latency-svc-wmxvn Apr 29 19:21:43.027: INFO: Got endpoints: latency-svc-wmxvn [652.787371ms] Apr 29 19:21:43.049: INFO: Created: latency-svc-ddxtg Apr 29 19:21:43.074: INFO: Got endpoints: latency-svc-ddxtg [658.325732ms] Apr 29 19:21:43.124: INFO: Created: latency-svc-4wsq7 Apr 29 19:21:43.163: INFO: Got endpoints: latency-svc-4wsq7 [690.287232ms] Apr 29 19:21:43.163: INFO: Created: latency-svc-cgvsh Apr 29 19:21:43.199: INFO: Got endpoints: latency-svc-cgvsh [687.772821ms] Apr 29 19:21:43.273: INFO: Created: latency-svc-mz48w Apr 29 19:21:43.301: INFO: Got endpoints: latency-svc-mz48w [759.844572ms] Apr 29 19:21:43.301: INFO: Created: latency-svc-lwk6g Apr 29 19:21:43.349: INFO: Got endpoints: latency-svc-lwk6g [759.048096ms] Apr 29 19:21:43.411: INFO: Created: latency-svc-k6592 Apr 29 19:21:43.427: INFO: Got endpoints: latency-svc-k6592 [789.495686ms] Apr 29 19:21:43.428: INFO: Created: latency-svc-l98nd Apr 29 19:21:43.440: INFO: Got endpoints: latency-svc-l98nd [767.192125ms] Apr 29 19:21:43.475: INFO: Created: latency-svc-z9mt6 Apr 29 19:21:43.584: INFO: Got endpoints: latency-svc-z9mt6 [844.65223ms] Apr 29 19:21:43.586: INFO: Created: latency-svc-qvzwb Apr 29 19:21:43.602: INFO: Got endpoints: latency-svc-qvzwb [820.871266ms] Apr 29 19:21:43.637: INFO: Created: latency-svc-6pzf7 Apr 29 19:21:43.650: INFO: Got endpoints: latency-svc-6pzf7 [770.132045ms] Apr 29 19:21:43.674: 
INFO: Created: latency-svc-482x8 Apr 29 19:21:43.716: INFO: Got endpoints: latency-svc-482x8 [820.339689ms] Apr 29 19:21:43.751: INFO: Created: latency-svc-fnq6t Apr 29 19:21:43.764: INFO: Got endpoints: latency-svc-fnq6t [844.691948ms] Apr 29 19:21:43.787: INFO: Created: latency-svc-f9l59 Apr 29 19:21:43.805: INFO: Got endpoints: latency-svc-f9l59 [861.768732ms] Apr 29 19:21:43.842: INFO: Created: latency-svc-hd6vc Apr 29 19:21:43.859: INFO: Created: latency-svc-7m8mn Apr 29 19:21:43.859: INFO: Got endpoints: latency-svc-hd6vc [873.84678ms] Apr 29 19:21:43.883: INFO: Got endpoints: latency-svc-7m8mn [856.093251ms] Apr 29 19:21:43.907: INFO: Created: latency-svc-5slns Apr 29 19:21:43.919: INFO: Got endpoints: latency-svc-5slns [844.83343ms] Apr 29 19:21:43.937: INFO: Created: latency-svc-66kzm Apr 29 19:21:43.974: INFO: Got endpoints: latency-svc-66kzm [810.456486ms] Apr 29 19:21:43.998: INFO: Created: latency-svc-ntmpn Apr 29 19:21:44.009: INFO: Got endpoints: latency-svc-ntmpn [810.162535ms] Apr 29 19:21:44.033: INFO: Created: latency-svc-cvm5m Apr 29 19:21:44.045: INFO: Got endpoints: latency-svc-cvm5m [744.235908ms] Apr 29 19:21:44.063: INFO: Created: latency-svc-q2grt Apr 29 19:21:44.095: INFO: Got endpoints: latency-svc-q2grt [746.080269ms] Apr 29 19:21:44.105: INFO: Created: latency-svc-c57p9 Apr 29 19:21:44.128: INFO: Got endpoints: latency-svc-c57p9 [701.199146ms] Apr 29 19:21:44.159: INFO: Created: latency-svc-w5pqp Apr 29 19:21:44.172: INFO: Got endpoints: latency-svc-w5pqp [731.103322ms] Apr 29 19:21:44.195: INFO: Created: latency-svc-g7gfb Apr 29 19:21:44.219: INFO: Got endpoints: latency-svc-g7gfb [634.311677ms] Apr 29 19:21:44.237: INFO: Created: latency-svc-nvn52 Apr 29 19:21:44.249: INFO: Got endpoints: latency-svc-nvn52 [646.716384ms] Apr 29 19:21:44.266: INFO: Created: latency-svc-gwcs8 Apr 29 19:21:44.279: INFO: Got endpoints: latency-svc-gwcs8 [628.878802ms] Apr 29 19:21:44.297: INFO: Created: latency-svc-dxj6h Apr 29 19:21:44.339: INFO: Got 
endpoints: latency-svc-dxj6h [622.735881ms] Apr 29 19:21:44.351: INFO: Created: latency-svc-27n96 Apr 29 19:21:44.369: INFO: Got endpoints: latency-svc-27n96 [604.369452ms] Apr 29 19:21:44.434: INFO: Created: latency-svc-skpph Apr 29 19:21:44.471: INFO: Got endpoints: latency-svc-skpph [665.248284ms] Apr 29 19:21:44.490: INFO: Created: latency-svc-bvckb Apr 29 19:21:44.500: INFO: Got endpoints: latency-svc-bvckb [641.274793ms] Apr 29 19:21:44.531: INFO: Created: latency-svc-75mf7 Apr 29 19:21:44.542: INFO: Got endpoints: latency-svc-75mf7 [659.227621ms] Apr 29 19:21:44.562: INFO: Created: latency-svc-rkdd6 Apr 29 19:21:44.590: INFO: Got endpoints: latency-svc-rkdd6 [670.483382ms] Apr 29 19:21:44.602: INFO: Created: latency-svc-76hg9 Apr 29 19:21:44.614: INFO: Got endpoints: latency-svc-76hg9 [640.647219ms] Apr 29 19:21:44.645: INFO: Created: latency-svc-6vg2n Apr 29 19:21:44.669: INFO: Got endpoints: latency-svc-6vg2n [659.477641ms] Apr 29 19:21:44.686: INFO: Created: latency-svc-nmm6b Apr 29 19:21:44.734: INFO: Got endpoints: latency-svc-nmm6b [688.696964ms] Apr 29 19:21:44.759: INFO: Created: latency-svc-tl48j Apr 29 19:21:44.771: INFO: Got endpoints: latency-svc-tl48j [675.842787ms] Apr 29 19:21:44.789: INFO: Created: latency-svc-zfkx4 Apr 29 19:21:44.801: INFO: Got endpoints: latency-svc-zfkx4 [672.366522ms] Apr 29 19:21:44.884: INFO: Created: latency-svc-pvrqc Apr 29 19:21:44.887: INFO: Got endpoints: latency-svc-pvrqc [715.785459ms] Apr 29 19:21:44.888: INFO: Latencies: [53.658956ms 77.688609ms 123.797665ms 143.838566ms 179.699606ms 221.67595ms 275.352548ms 354.96596ms 421.512286ms 436.443587ms 491.12559ms 544.401576ms 571.350831ms 582.656733ms 586.89352ms 587.026449ms 594.596437ms 596.831259ms 599.212052ms 599.852446ms 600.549748ms 604.369452ms 604.72001ms 606.336241ms 609.517188ms 610.209159ms 611.861605ms 612.553848ms 613.324494ms 615.146926ms 615.936992ms 616.654707ms 619.134239ms 622.735881ms 623.43001ms 623.886818ms 628.771607ms 628.878802ms 
629.156774ms 631.983466ms 633.026534ms 633.739513ms 634.311677ms 634.876129ms 635.341852ms 635.402109ms 636.702303ms 637.539142ms 638.403749ms 640.337882ms 640.647219ms 640.651666ms 640.916735ms 641.055895ms 641.274793ms 641.449448ms 642.14941ms 644.612701ms 646.716384ms 647.054654ms 647.069379ms 650.934858ms 651.322187ms 652.095157ms 652.146913ms 652.412224ms 652.473205ms 652.535649ms 652.787371ms 653.087851ms 653.222098ms 655.719637ms 658.325732ms 658.927725ms 658.971635ms 659.227621ms 659.298373ms 659.324044ms 659.477641ms 659.528129ms 659.728861ms 659.926898ms 662.28722ms 663.785085ms 665.16076ms 665.248284ms 666.651582ms 667.190945ms 670.483382ms 670.498443ms 671.145688ms 671.237765ms 671.740639ms 672.366522ms 672.841528ms 673.463096ms 675.444463ms 675.842787ms 676.426276ms 676.514514ms 676.569963ms 677.12031ms 677.540363ms 679.467897ms 680.385384ms 681.675876ms 682.901977ms 683.663511ms 685.099451ms 685.953485ms 687.228988ms 687.772821ms 688.696964ms 688.966064ms 689.305219ms 690.287232ms 691.323488ms 693.078143ms 694.430334ms 695.096373ms 695.374184ms 696.876074ms 699.330347ms 700.917398ms 701.199146ms 702.827895ms 704.811448ms 705.035095ms 707.282014ms 707.314006ms 710.437373ms 710.930698ms 713.205546ms 714.329278ms 715.785459ms 718.358985ms 719.613431ms 721.985382ms 725.013139ms 728.786715ms 731.103322ms 731.788788ms 732.285805ms 736.736345ms 736.909286ms 737.015629ms 739.050482ms 739.164216ms 740.631205ms 742.014639ms 743.367395ms 744.235908ms 746.080269ms 748.655035ms 748.76478ms 748.768706ms 749.122582ms 749.294122ms 750.172061ms 750.804778ms 757.227234ms 758.535504ms 759.048096ms 759.844572ms 760.460823ms 766.599273ms 767.192125ms 770.132045ms 774.847521ms 775.859968ms 783.031123ms 787.746347ms 789.495686ms 791.559035ms 797.911912ms 799.167297ms 804.597523ms 804.92009ms 810.162535ms 810.456486ms 813.188566ms 820.339689ms 820.871266ms 823.668108ms 826.258174ms 826.346294ms 835.749902ms 837.795199ms 844.65223ms 844.691948ms 844.753906ms 844.83343ms 
850.81514ms 856.093251ms 861.768732ms 861.823257ms 870.456676ms 873.84678ms 874.287089ms 884.403244ms]
Apr 29 19:21:44.888: INFO: 50 %ile: 676.569963ms
Apr 29 19:21:44.888: INFO: 90 %ile: 813.188566ms
Apr 29 19:21:44.888: INFO: 99 %ile: 874.287089ms
Apr 29 19:21:44.888: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:21:44.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-4905" for this suite.
Apr 29 19:22:04.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:22:04.998: INFO: namespace svc-latency-4905 deletion completed in 20.100784347s
• [SLOW TEST:33.748 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:22:04.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Apr 29 19:22:05.050: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 29 19:22:05.070: INFO: Waiting for terminating namespaces to be deleted...
Apr 29 19:22:05.073: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Apr 29 19:22:05.080: INFO: kindnet-7fbjm from kube-system started at 2021-04-13 08:09:02 +0000 UTC (1 container statuses recorded)
Apr 29 19:22:05.080: INFO: Container kindnet-cni ready: true, restart count 0
Apr 29 19:22:05.080: INFO: chaos-daemon-kbww4 from default started at 2021-04-13 15:47:46 +0000 UTC (1 container statuses recorded)
Apr 29 19:22:05.080: INFO: Container chaos-daemon ready: true, restart count 0
Apr 29 19:22:05.080: INFO: kube-proxy-qp6db from kube-system started at 2021-04-13 08:09:02 +0000 UTC (1 container statuses recorded)
Apr 29 19:22:05.080: INFO: Container kube-proxy ready: true, restart count 0
Apr 29 19:22:05.080: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Apr 29 19:22:05.089: INFO: chaos-controller-manager-6c68f56f79-plhrb from default started at 2021-04-13 15:47:46 +0000 UTC (1 container statuses recorded)
Apr 29 19:22:05.089: INFO: Container chaos-mesh ready: true, restart count 0
Apr 29 19:22:05.089: INFO: kindnet-nxsfn from kube-system started at 2021-04-13 08:09:02 +0000 UTC (1 container statuses recorded)
Apr 29 19:22:05.089: INFO: Container kindnet-cni ready: true, restart count 0
Apr 29 19:22:05.089: INFO: kube-proxy-pz4cr from kube-system started at 2021-04-13 08:09:02 +0000 UTC (1 container statuses recorded)
Apr 29 19:22:05.089: INFO: Container kube-proxy ready: true, restart count 0
Apr 29 19:22:05.089: INFO: chaos-daemon-5nrq6 from default started at 2021-04-13 15:47:46 +0000 UTC (1 container statuses recorded)
Apr 29 19:22:05.089: INFO: Container chaos-daemon ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Apr 29 19:22:05.200: INFO: Pod chaos-controller-manager-6c68f56f79-plhrb requesting resource cpu=25m on Node iruya-worker2
Apr 29 19:22:05.201: INFO: Pod chaos-daemon-5nrq6 requesting resource cpu=0m on Node iruya-worker2
Apr 29 19:22:05.201: INFO: Pod chaos-daemon-kbww4 requesting resource cpu=0m on Node iruya-worker
Apr 29 19:22:05.201: INFO: Pod kindnet-7fbjm requesting resource cpu=100m on Node iruya-worker
Apr 29 19:22:05.201: INFO: Pod kindnet-nxsfn requesting resource cpu=100m on Node iruya-worker2
Apr 29 19:22:05.201: INFO: Pod kube-proxy-pz4cr requesting resource cpu=0m on Node iruya-worker2
Apr 29 19:22:05.201: INFO: Pod kube-proxy-qp6db requesting resource cpu=0m on Node iruya-worker
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-195e8ab5-0608-43e2-ac8e-dc7112f242f9.167a6a829e1fc050], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7353/filler-pod-195e8ab5-0608-43e2-ac8e-dc7112f242f9 to iruya-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-195e8ab5-0608-43e2-ac8e-dc7112f242f9.167a6a8323cf3f1c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-195e8ab5-0608-43e2-ac8e-dc7112f242f9.167a6a8366b7ca7a], Reason = [Created], Message = [Created container filler-pod-195e8ab5-0608-43e2-ac8e-dc7112f242f9]
STEP: Considering event: Type = [Normal], Name = [filler-pod-195e8ab5-0608-43e2-ac8e-dc7112f242f9.167a6a837a685113], Reason = [Started], Message = [Started container filler-pod-195e8ab5-0608-43e2-ac8e-dc7112f242f9]
STEP: Considering event: Type = [Normal], Name = [filler-pod-410d3dfd-670e-43f3-84d8-6dc32b612347.167a6a829ef1bf6b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7353/filler-pod-410d3dfd-670e-43f3-84d8-6dc32b612347 to iruya-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-410d3dfd-670e-43f3-84d8-6dc32b612347.167a6a82ee9a9ffb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-410d3dfd-670e-43f3-84d8-6dc32b612347.167a6a83467f8941], Reason = [Created], Message = [Created container filler-pod-410d3dfd-670e-43f3-84d8-6dc32b612347]
STEP: Considering event: Type = [Normal], Name = [filler-pod-410d3dfd-670e-43f3-84d8-6dc32b612347.167a6a835e109b10], Reason = [Started], Message = [Started container filler-pod-410d3dfd-670e-43f3-84d8-6dc32b612347]
STEP: Considering event: Type = [Warning], Name = [additional-pod.167a6a840697fea0], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:22:12.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7353" for this suite.
Apr 29 19:22:18.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:22:18.455: INFO: namespace sched-pred-7353 deletion completed in 6.0960563s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:13.456 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:22:18.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 29 19:22:18.514: INFO: Creating ReplicaSet my-hostname-basic-12250e97-6835-4f14-9806-970c6f76c9b0
Apr 29 19:22:18.554: INFO: Pod name my-hostname-basic-12250e97-6835-4f14-9806-970c6f76c9b0: Found 1 pods out of 1
Apr 29 19:22:18.554: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-12250e97-6835-4f14-9806-970c6f76c9b0" is running
Apr 29 19:22:22.589: INFO: Pod "my-hostname-basic-12250e97-6835-4f14-9806-970c6f76c9b0-nsq8l" is running (conditions: [])
Apr 29 19:22:22.589: INFO: Trying to dial the pod
Apr 29 19:22:27.640: INFO: Controller my-hostname-basic-12250e97-6835-4f14-9806-970c6f76c9b0: Got expected result from replica 1 [my-hostname-basic-12250e97-6835-4f14-9806-970c6f76c9b0-nsq8l]: "my-hostname-basic-12250e97-6835-4f14-9806-970c6f76c9b0-nsq8l", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:22:27.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2612" for this suite.
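Editor's note: the [sig-network] Service endpoints latency run earlier in this log reports 50/90/99 %ile figures over its 200 recorded samples. A minimal nearest-rank percentile sketch is below; this is an illustration only (the e2e framework's exact indexing convention is not shown in this log, and the sample data here is synthetic, not the recorded latencies):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile over sorted samples.
    Illustrative formulation; the e2e framework's exact
    rounding/indexing may differ slightly."""
    s = sorted(samples)
    rank = max(1, math.ceil(p / 100.0 * len(s)))  # 1-based nearest rank
    return s[rank - 1]

# Synthetic stand-in for the 200 recorded latencies (ms):
durations_ms = list(range(1, 201))
print(percentile(durations_ms, 50))  # 100
print(percentile(durations_ms, 90))  # 180
print(percentile(durations_ms, 99))  # 198
```

With the actual sorted latency list from the log, the same computation would yield figures in the vicinity of the reported 676.6/813.2/874.3 ms values, depending on the framework's exact rank convention.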
Apr 29 19:22:33.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:22:33.754: INFO: namespace replicaset-2612 deletion completed in 6.109290151s
• [SLOW TEST:15.299 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:22:33.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Apr 29 19:22:33.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Apr 29 19:22:34.025: INFO: stderr: ""
Apr 29 19:22:34.025: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npingcap.com/v1alpha1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:22:34.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3968" for this suite.
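Editor's note: the api-versions check above passes because the bare core group/version "v1" appears as its own line in the captured `kubectl api-versions` stdout. A minimal sketch of that membership check (the helper name is illustrative, and the sample below is abridged from the stdout in the log):

```python
def has_core_v1(api_versions_stdout: str) -> bool:
    """Return True if the core "v1" group/version is listed.
    Exact line match avoids false positives like "apps/v1"."""
    return "v1" in api_versions_stdout.strip().splitlines()

# Abridged from the stdout captured in the log above:
sample = "apps/v1\nbatch/v1\nnetworking.k8s.io/v1\nv1\n"
print(has_core_v1(sample))  # True
```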
Apr 29 19:22:40.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:22:40.173: INFO: namespace kubectl-3968 deletion completed in 6.138115206s
• [SLOW TEST:6.419 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:22:40.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Apr 29 19:22:40.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Apr 29 19:22:42.746: INFO: stderr: ""
Apr 29 19:22:42.747: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:40269\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:40269/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:22:42.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4294" for this suite.
Apr 29 19:22:48.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:22:48.857: INFO: namespace kubectl-4294 deletion completed in 6.106559798s • [SLOW TEST:8.683 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:22:48.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 29 19:22:48.923: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1dd6d2b0-a258-42e5-bbfd-feb46f2f47ed" in namespace "projected-465" to be "success or failure" Apr 29 19:22:48.927: INFO: Pod "downwardapi-volume-1dd6d2b0-a258-42e5-bbfd-feb46f2f47ed": Phase="Pending", Reason="", readiness=false. Elapsed: 3.74129ms Apr 29 19:22:50.932: INFO: Pod "downwardapi-volume-1dd6d2b0-a258-42e5-bbfd-feb46f2f47ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008051956s Apr 29 19:22:52.936: INFO: Pod "downwardapi-volume-1dd6d2b0-a258-42e5-bbfd-feb46f2f47ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013025151s Apr 29 19:22:54.941: INFO: Pod "downwardapi-volume-1dd6d2b0-a258-42e5-bbfd-feb46f2f47ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017399678s STEP: Saw pod success Apr 29 19:22:54.941: INFO: Pod "downwardapi-volume-1dd6d2b0-a258-42e5-bbfd-feb46f2f47ed" satisfied condition "success or failure" Apr 29 19:22:54.944: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-1dd6d2b0-a258-42e5-bbfd-feb46f2f47ed container client-container: STEP: delete the pod Apr 29 19:22:54.965: INFO: Waiting for pod downwardapi-volume-1dd6d2b0-a258-42e5-bbfd-feb46f2f47ed to disappear Apr 29 19:22:54.981: INFO: Pod downwardapi-volume-1dd6d2b0-a258-42e5-bbfd-feb46f2f47ed no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:22:54.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-465" for this suite. 
Apr 29 19:23:00.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:23:01.093: INFO: namespace projected-465 deletion completed in 6.108275498s • [SLOW TEST:12.235 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:23:01.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-3a8be6e1-983a-4515-ab50-1b6124a2a493 in 
namespace container-probe-9035 Apr 29 19:23:05.218: INFO: Started pod busybox-3a8be6e1-983a-4515-ab50-1b6124a2a493 in namespace container-probe-9035 STEP: checking the pod's current state and verifying that restartCount is present Apr 29 19:23:05.221: INFO: Initial restart count of pod busybox-3a8be6e1-983a-4515-ab50-1b6124a2a493 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:27:05.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9035" for this suite. Apr 29 19:27:11.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:27:11.903: INFO: namespace container-probe-9035 deletion completed in 6.111324097s • [SLOW TEST:250.809 seconds] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:27:11.903: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-0437968f-ad3e-42cf-a520-a95363b08d13 STEP: Creating secret with name s-test-opt-upd-9e97964c-cdcc-4047-8f5e-05c28a6e6631 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-0437968f-ad3e-42cf-a520-a95363b08d13 STEP: Updating secret s-test-opt-upd-9e97964c-cdcc-4047-8f5e-05c28a6e6631 STEP: Creating secret with name s-test-opt-create-15ebc189-b0ce-42b0-9eea-2e55f6a379da STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:27:20.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9009" for this suite. 
Apr 29 19:27:42.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:27:42.293: INFO: namespace secrets-9009 deletion completed in 22.11486826s • [SLOW TEST:30.390 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:27:42.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Apr 29 19:27:42.401: INFO: Waiting up to 5m0s for pod "pod-5a960d6d-23ad-4b77-8bba-a7c2287c6562" in namespace "emptydir-6009" to be "success or failure" Apr 29 19:27:42.407: INFO: Pod "pod-5a960d6d-23ad-4b77-8bba-a7c2287c6562": Phase="Pending", 
Reason="", readiness=false. Elapsed: 6.27929ms Apr 29 19:27:44.412: INFO: Pod "pod-5a960d6d-23ad-4b77-8bba-a7c2287c6562": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010473779s Apr 29 19:27:46.423: INFO: Pod "pod-5a960d6d-23ad-4b77-8bba-a7c2287c6562": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021836334s STEP: Saw pod success Apr 29 19:27:46.423: INFO: Pod "pod-5a960d6d-23ad-4b77-8bba-a7c2287c6562" satisfied condition "success or failure" Apr 29 19:27:46.425: INFO: Trying to get logs from node iruya-worker pod pod-5a960d6d-23ad-4b77-8bba-a7c2287c6562 container test-container: STEP: delete the pod Apr 29 19:27:46.441: INFO: Waiting for pod pod-5a960d6d-23ad-4b77-8bba-a7c2287c6562 to disappear Apr 29 19:27:46.446: INFO: Pod pod-5a960d6d-23ad-4b77-8bba-a7c2287c6562 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:27:46.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6009" for this suite. 
Apr 29 19:27:52.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:27:52.550: INFO: namespace emptydir-6009 deletion completed in 6.101727836s • [SLOW TEST:10.257 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:27:52.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first 
update Apr 29 19:27:52.708: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7495,SelfLink:/api/v1/namespaces/watch-7495/configmaps/e2e-watch-test-resource-version,UID:22b9a5df-4aeb-4df4-a3a0-3ad1d4fe04b8,ResourceVersion:2885226,Generation:0,CreationTimestamp:2021-04-29 19:27:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 29 19:27:52.708: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7495,SelfLink:/api/v1/namespaces/watch-7495/configmaps/e2e-watch-test-resource-version,UID:22b9a5df-4aeb-4df4-a3a0-3ad1d4fe04b8,ResourceVersion:2885227,Generation:0,CreationTimestamp:2021-04-29 19:27:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:27:52.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7495" for this suite. 
Apr 29 19:27:58.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:27:58.845: INFO: namespace watch-7495 deletion completed in 6.109621225s • [SLOW TEST:6.294 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:27:58.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image 
docker.io/library/nginx:1.14-alpine Apr 29 19:27:58.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-8940' Apr 29 19:27:58.993: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 29 19:27:58.993: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Apr 29 19:27:59.022: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-n8wz2] Apr 29 19:27:59.022: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-n8wz2" in namespace "kubectl-8940" to be "running and ready" Apr 29 19:27:59.038: INFO: Pod "e2e-test-nginx-rc-n8wz2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.865675ms Apr 29 19:28:01.043: INFO: Pod "e2e-test-nginx-rc-n8wz2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021152617s Apr 29 19:28:03.047: INFO: Pod "e2e-test-nginx-rc-n8wz2": Phase="Running", Reason="", readiness=true. Elapsed: 4.025329296s Apr 29 19:28:03.047: INFO: Pod "e2e-test-nginx-rc-n8wz2" satisfied condition "running and ready" Apr 29 19:28:03.047: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-n8wz2] Apr 29 19:28:03.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-8940' Apr 29 19:28:03.162: INFO: stderr: "" Apr 29 19:28:03.162: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Apr 29 19:28:03.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-8940' Apr 29 19:28:03.261: INFO: stderr: "" Apr 29 19:28:03.261: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:28:03.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8940" for this suite. 
Apr 29 19:28:09.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:28:09.419: INFO: namespace kubectl-8940 deletion completed in 6.156090586s • [SLOW TEST:10.574 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:28:09.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 29 19:28:14.025: INFO: Successfully updated pod "annotationupdate216eaf1e-9f93-4f36-8581-8d41564880e8" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:28:16.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-673" for this suite. Apr 29 19:28:38.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:28:38.150: INFO: namespace downward-api-673 deletion completed in 22.103118388s • [SLOW TEST:28.729 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:28:38.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for 
a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-82ca78a1-8cbd-4661-a40d-a9e7c5f74097 Apr 29 19:28:38.243: INFO: Pod name my-hostname-basic-82ca78a1-8cbd-4661-a40d-a9e7c5f74097: Found 0 pods out of 1 Apr 29 19:28:43.256: INFO: Pod name my-hostname-basic-82ca78a1-8cbd-4661-a40d-a9e7c5f74097: Found 1 pods out of 1 Apr 29 19:28:43.256: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-82ca78a1-8cbd-4661-a40d-a9e7c5f74097" are running Apr 29 19:28:43.259: INFO: Pod "my-hostname-basic-82ca78a1-8cbd-4661-a40d-a9e7c5f74097-bmgz4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-04-29 19:28:38 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-04-29 19:28:40 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-04-29 19:28:40 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-04-29 19:28:38 +0000 UTC Reason: Message:}]) Apr 29 19:28:43.259: INFO: Trying to dial the pod Apr 29 19:28:48.274: INFO: Controller my-hostname-basic-82ca78a1-8cbd-4661-a40d-a9e7c5f74097: Got expected result from replica 1 [my-hostname-basic-82ca78a1-8cbd-4661-a40d-a9e7c5f74097-bmgz4]: "my-hostname-basic-82ca78a1-8cbd-4661-a40d-a9e7c5f74097-bmgz4", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:28:48.274: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4980" for this suite. Apr 29 19:28:54.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:28:54.423: INFO: namespace replication-controller-4980 deletion completed in 6.144656292s • [SLOW TEST:16.273 seconds] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:28:54.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 29 19:28:54.507: INFO: Waiting up to 5m0s for pod "downwardapi-volume-030efe4b-105f-4b2f-a666-42e73fe90b86" in namespace "projected-8704" to be "success or failure" Apr 29 19:28:54.556: INFO: Pod "downwardapi-volume-030efe4b-105f-4b2f-a666-42e73fe90b86": Phase="Pending", Reason="", readiness=false. Elapsed: 49.323082ms Apr 29 19:28:56.560: INFO: Pod "downwardapi-volume-030efe4b-105f-4b2f-a666-42e73fe90b86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053063379s Apr 29 19:28:58.564: INFO: Pod "downwardapi-volume-030efe4b-105f-4b2f-a666-42e73fe90b86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057481154s STEP: Saw pod success Apr 29 19:28:58.565: INFO: Pod "downwardapi-volume-030efe4b-105f-4b2f-a666-42e73fe90b86" satisfied condition "success or failure" Apr 29 19:28:58.568: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-030efe4b-105f-4b2f-a666-42e73fe90b86 container client-container: STEP: delete the pod Apr 29 19:28:58.624: INFO: Waiting for pod downwardapi-volume-030efe4b-105f-4b2f-a666-42e73fe90b86 to disappear Apr 29 19:28:58.630: INFO: Pod downwardapi-volume-030efe4b-105f-4b2f-a666-42e73fe90b86 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:28:58.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8704" for this suite. 
Apr 29 19:29:04.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:29:04.732: INFO: namespace projected-8704 deletion completed in 6.098613635s • [SLOW TEST:10.308 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:29:04.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-11ed984b-cf73-4bb5-8562-c9bb268c5aaa STEP: Creating a pod to test consume configMaps Apr 29 19:29:04.850: INFO: Waiting up to 5m0s for pod "pod-configmaps-12516dae-eb7c-4110-8e84-a31a29366647" in namespace 
"configmap-5553" to be "success or failure" Apr 29 19:29:04.858: INFO: Pod "pod-configmaps-12516dae-eb7c-4110-8e84-a31a29366647": Phase="Pending", Reason="", readiness=false. Elapsed: 8.633857ms Apr 29 19:29:06.879: INFO: Pod "pod-configmaps-12516dae-eb7c-4110-8e84-a31a29366647": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029496119s Apr 29 19:29:08.884: INFO: Pod "pod-configmaps-12516dae-eb7c-4110-8e84-a31a29366647": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033952009s STEP: Saw pod success Apr 29 19:29:08.884: INFO: Pod "pod-configmaps-12516dae-eb7c-4110-8e84-a31a29366647" satisfied condition "success or failure" Apr 29 19:29:08.887: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-12516dae-eb7c-4110-8e84-a31a29366647 container configmap-volume-test: STEP: delete the pod Apr 29 19:29:08.931: INFO: Waiting for pod pod-configmaps-12516dae-eb7c-4110-8e84-a31a29366647 to disappear Apr 29 19:29:08.975: INFO: Pod pod-configmaps-12516dae-eb7c-4110-8e84-a31a29366647 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:29:08.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5553" for this suite. 
Apr 29 19:29:15.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:29:15.095: INFO: namespace configmap-5553 deletion completed in 6.116339827s • [SLOW TEST:10.363 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:29:15.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 29 19:29:15.199: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:29:19.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6709" for this suite. Apr 29 19:29:59.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:29:59.487: INFO: namespace pods-6709 deletion completed in 40.095529372s • [SLOW TEST:44.391 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:29:59.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 29 19:29:59.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-1599' Apr 29 19:29:59.779: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 29 19:29:59.779: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Apr 29 19:30:01.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1599' Apr 29 19:30:01.963: INFO: stderr: "" Apr 29 19:30:01.963: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:30:01.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1599" for this suite. 
Apr 29 19:32:05.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:32:06.097: INFO: namespace kubectl-1599 deletion completed in 2m4.131092057s • [SLOW TEST:126.610 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:32:06.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-72f96bbc-c379-4685-9cb1-40d7977c0813 STEP: Creating a pod to test consume configMaps Apr 29 19:32:06.194: INFO: 
Waiting up to 5m0s for pod "pod-configmaps-b9bc02c4-c5a0-439b-b4ff-2d0204464cc6" in namespace "configmap-7690" to be "success or failure" Apr 29 19:32:06.198: INFO: Pod "pod-configmaps-b9bc02c4-c5a0-439b-b4ff-2d0204464cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.422562ms Apr 29 19:32:08.207: INFO: Pod "pod-configmaps-b9bc02c4-c5a0-439b-b4ff-2d0204464cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013039142s Apr 29 19:32:10.212: INFO: Pod "pod-configmaps-b9bc02c4-c5a0-439b-b4ff-2d0204464cc6": Phase="Running", Reason="", readiness=true. Elapsed: 4.017661769s Apr 29 19:32:12.219: INFO: Pod "pod-configmaps-b9bc02c4-c5a0-439b-b4ff-2d0204464cc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024545357s STEP: Saw pod success Apr 29 19:32:12.219: INFO: Pod "pod-configmaps-b9bc02c4-c5a0-439b-b4ff-2d0204464cc6" satisfied condition "success or failure" Apr 29 19:32:12.223: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-b9bc02c4-c5a0-439b-b4ff-2d0204464cc6 container configmap-volume-test: STEP: delete the pod Apr 29 19:32:12.255: INFO: Waiting for pod pod-configmaps-b9bc02c4-c5a0-439b-b4ff-2d0204464cc6 to disappear Apr 29 19:32:12.270: INFO: Pod pod-configmaps-b9bc02c4-c5a0-439b-b4ff-2d0204464cc6 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:32:12.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7690" for this suite. 
Apr 29 19:32:18.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:32:18.381: INFO: namespace configmap-7690 deletion completed in 6.106284123s • [SLOW TEST:12.283 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:32:18.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 29 19:32:18.452: INFO: Waiting up to 5m0s for pod "pod-40e37a03-8770-453d-9bd4-56b0be2c11e2" in namespace "emptydir-4621" to be "success or failure" Apr 29 19:32:18.482: INFO: Pod "pod-40e37a03-8770-453d-9bd4-56b0be2c11e2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 29.04553ms Apr 29 19:32:20.486: INFO: Pod "pod-40e37a03-8770-453d-9bd4-56b0be2c11e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033230401s Apr 29 19:32:22.598: INFO: Pod "pod-40e37a03-8770-453d-9bd4-56b0be2c11e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.145845931s STEP: Saw pod success Apr 29 19:32:22.598: INFO: Pod "pod-40e37a03-8770-453d-9bd4-56b0be2c11e2" satisfied condition "success or failure" Apr 29 19:32:22.602: INFO: Trying to get logs from node iruya-worker2 pod pod-40e37a03-8770-453d-9bd4-56b0be2c11e2 container test-container: STEP: delete the pod Apr 29 19:32:22.681: INFO: Waiting for pod pod-40e37a03-8770-453d-9bd4-56b0be2c11e2 to disappear Apr 29 19:32:22.696: INFO: Pod pod-40e37a03-8770-453d-9bd4-56b0be2c11e2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:32:22.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4621" for this suite. 
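The `Elapsed:` fields throughout this log use Go's duration format ("29.04553ms", "2.033230401s", "2m4.131092057s"). A sketch of converting those strings to seconds, covering only the units that appear in these e2e logs:

```python
import re

def go_duration_to_seconds(s):
    """Convert a Go-style duration string ("2m4.131092057s",
    "29.04553ms") to float seconds. Handles only the h/m/s/ms
    units seen in e2e logs, not us/ns."""
    units = {"h": 3600.0, "m": 60.0, "s": 1.0, "ms": 0.001}
    total = 0.0
    # "ms" must be tried before "m" and "s" in the alternation.
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(ms|h|m|s)", s):
        total += float(value) * units[unit]
    return total
```

For example, the namespace deletion above ("2m4.131092057s" earlier in this run) works out to about 124.13 seconds.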
Apr 29 19:32:28.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:32:28.801: INFO: namespace emptydir-4621 deletion completed in 6.101486008s • [SLOW TEST:10.419 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:32:28.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Apr 29 19:32:28.890: INFO: Waiting up to 5m0s for pod "client-containers-3b97bb46-7228-4cf9-a66e-2f15c5cee0b1" in namespace "containers-3505" to be "success or failure" Apr 29 19:32:28.894: INFO: Pod "client-containers-3b97bb46-7228-4cf9-a66e-2f15c5cee0b1": 
Phase="Pending", Reason="", readiness=false. Elapsed: 3.589258ms Apr 29 19:32:30.898: INFO: Pod "client-containers-3b97bb46-7228-4cf9-a66e-2f15c5cee0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007354724s Apr 29 19:32:32.902: INFO: Pod "client-containers-3b97bb46-7228-4cf9-a66e-2f15c5cee0b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011814249s STEP: Saw pod success Apr 29 19:32:32.902: INFO: Pod "client-containers-3b97bb46-7228-4cf9-a66e-2f15c5cee0b1" satisfied condition "success or failure" Apr 29 19:32:32.905: INFO: Trying to get logs from node iruya-worker pod client-containers-3b97bb46-7228-4cf9-a66e-2f15c5cee0b1 container test-container: STEP: delete the pod Apr 29 19:32:32.938: INFO: Waiting for pod client-containers-3b97bb46-7228-4cf9-a66e-2f15c5cee0b1 to disappear Apr 29 19:32:32.966: INFO: Pod client-containers-3b97bb46-7228-4cf9-a66e-2f15c5cee0b1 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:32:32.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3505" for this suite. 
Apr 29 19:32:38.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:32:39.074: INFO: namespace containers-3505 deletion completed in 6.104291817s • [SLOW TEST:10.273 seconds] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:32:39.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Apr 29 19:32:39.181: INFO: Pod name pod-release: Found 0 pods out of 1 Apr 29 19:32:44.185: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:32:45.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6208" for this suite. Apr 29 19:32:51.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:32:51.441: INFO: namespace replication-controller-6208 deletion completed in 6.232210137s • [SLOW TEST:12.367 seconds] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:32:51.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 29 19:32:51.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Apr 29 19:32:51.680: INFO: stderr: "" Apr 29 19:32:51.680: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T05:17:59Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2021-01-22T21:57:01Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:32:51.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-506" for this suite. 
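The `kubectl version` stdout captured above is a pair of Go-struct style `version.Info` dumps. A sketch of pulling the `GitVersion` fields out of that text with a regex (an illustration of how one might check "all data is printed", not the test's actual assertion):

```python
import re

def git_versions(version_output):
    """Extract every GitVersion field from the Go-struct style
    text that `kubectl version` prints."""
    return re.findall(r'GitVersion:"([^"]+)"', version_output)
```

Run against the output logged above, this yields `["v1.15.12", "v1.15.12"]` — client and server versions match.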
Apr 29 19:32:57.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:32:57.784: INFO: namespace kubectl-506 deletion completed in 6.09906311s • [SLOW TEST:6.342 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:32:57.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 29 19:33:01.851: INFO: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-f7477ece-2266-4749-b6b1-53ffa72521ca,GenerateName:,Namespace:events-972,SelfLink:/api/v1/namespaces/events-972/pods/send-events-f7477ece-2266-4749-b6b1-53ffa72521ca,UID:eb0347d3-09a8-4992-b32d-764290e20e00,ResourceVersion:2886167,Generation:0,CreationTimestamp:2021-04-29 19:32:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 818980970,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7mbr4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7mbr4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-7mbr4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003682f60} {node.kubernetes.io/unreachable Exists NoExecute 
0xc003682f80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:32:57 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:33:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:33:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:32:57 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.63,StartTime:2021-04-29 19:32:57 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2021-04-29 19:33:00 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://06e1b707c5fc85a99813f2a10ebf021b40f4099ddb5d19d6e4264d7174992ae7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Apr 29 19:33:03.856: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 29 19:33:05.860: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:33:05.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-972" for this suite. 
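The pod dump above shows empty resources (`{map[] map[]}`) and, accordingly, `QOSClass:BestEffort`. A simplified sketch of how Kubernetes classifies pod QoS (ignoring init containers and per-resource edge cases):

```python
def qos_class(containers):
    """Classify pod QoS, simplified: BestEffort if no container sets
    any requests or limits; Guaranteed if every container sets cpu and
    memory limits with requests equal to limits (unset requests default
    to the limit); Burstable otherwise. Containers are dicts with
    optional "requests"/"limits" maps."""
    if all(not c.get("requests") and not c.get("limits") for c in containers):
        return "BestEffort"
    def is_guaranteed(c):
        lim, req = c.get("limits", {}), c.get("requests", {})
        return ("cpu" in lim and "memory" in lim
                and all(req.get(k, lim[k]) == lim[k] for k in ("cpu", "memory")))
    if all(is_guaranteed(c) for c in containers):
        return "Guaranteed"
    return "Burstable"
```

The `serve-hostname` pod in this log sets nothing, hence `BestEffort`.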
Apr 29 19:33:51.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:33:52.027: INFO: namespace events-972 deletion completed in 46.11285225s • [SLOW TEST:54.243 seconds] [k8s.io] [sig-node] Events /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:33:52.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 29 19:33:52.130: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2238,SelfLink:/api/v1/namespaces/watch-2238/configmaps/e2e-watch-test-watch-closed,UID:c26ac33d-264f-41cc-9c9b-d68cda658d8e,ResourceVersion:2886285,Generation:0,CreationTimestamp:2021-04-29 19:33:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 29 19:33:52.131: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2238,SelfLink:/api/v1/namespaces/watch-2238/configmaps/e2e-watch-test-watch-closed,UID:c26ac33d-264f-41cc-9c9b-d68cda658d8e,ResourceVersion:2886286,Generation:0,CreationTimestamp:2021-04-29 19:33:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 29 19:33:52.145: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2238,SelfLink:/api/v1/namespaces/watch-2238/configmaps/e2e-watch-test-watch-closed,UID:c26ac33d-264f-41cc-9c9b-d68cda658d8e,ResourceVersion:2886287,Generation:0,CreationTimestamp:2021-04-29 19:33:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 29 19:33:52.145: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-2238,SelfLink:/api/v1/namespaces/watch-2238/configmaps/e2e-watch-test-watch-closed,UID:c26ac33d-264f-41cc-9c9b-d68cda658d8e,ResourceVersion:2886288,Generation:0,CreationTimestamp:2021-04-29 19:33:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:33:52.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2238" for this suite. 
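The watch test above closes its first watch after two notifications, then opens a new watch from the last `ResourceVersion` it observed (2886286) and receives exactly the later events: MODIFIED (mutation: 2) and DELETED. A sketch simulating that resume semantics over an in-memory event list:

```python
def watch_from(events, last_resource_version):
    """Replay every event with a resourceVersion strictly greater
    than the last one observed, the way a restarted watch catches
    up without re-delivering already-seen events."""
    return [e for e in events if e["resourceVersion"] > last_resource_version]
```

Feeding it the four resourceVersions from this log (2886285 through 2886288) and resuming from 2886286 yields just the final MODIFIED and DELETED events, matching the log.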
Apr 29 19:33:58.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:33:58.268: INFO: namespace watch-2238 deletion completed in 6.120336219s • [SLOW TEST:6.241 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:33:58.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 29 19:33:58.320: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] 
Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:34:02.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6072" for this suite. Apr 29 19:34:52.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:34:52.465: INFO: namespace pods-6072 deletion completed in 50.091895348s • [SLOW TEST:54.197 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:34:52.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Apr 29 19:34:56.551: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 29 19:35:11.674: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:35:11.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7649" for this suite. Apr 29 19:35:17.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:35:17.827: INFO: namespace pods-7649 deletion completed in 6.139797234s • [SLOW TEST:25.362 seconds] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and 
TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:35:17.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 29 19:35:21.935: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:35:21.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6586" for this suite. 
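Both termination-message tests above exercise `terminationMessagePolicy: FallbackToLogsOnError`: the kubelet reads the message from the container's `terminationMessagePath`, and falls back to the tail of the container log only when the container *fails* with an empty message file. A hedged sketch of such a pod; the pod name, image, and command are illustrative, not taken from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # write the message the first test expects ("OK") to the message file
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log   # this is also the default path
    terminationMessagePolicy: FallbackToLogsOnError
```

When a pod *succeeds* with an empty message file, the fallback does not apply, so the message stays empty; that matches the `Expected: &{}` assertion in the second test above.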
Apr 29 19:35:27.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:35:28.086: INFO: namespace container-runtime-6586 deletion completed in 6.132468553s • [SLOW TEST:10.259 seconds] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:35:28.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-e132e49a-a8ee-422e-be9a-92b97f6304eb STEP: Creating a pod to test consume secrets Apr 29 19:35:28.251: INFO: Waiting up to 5m0s for pod "pod-secrets-cb049a0c-3d7b-42b2-bd3c-12bc8bd5610c" in namespace "secrets-1228" to be "success or failure" Apr 29 19:35:28.263: INFO: Pod "pod-secrets-cb049a0c-3d7b-42b2-bd3c-12bc8bd5610c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.9177ms Apr 29 19:35:30.267: INFO: Pod "pod-secrets-cb049a0c-3d7b-42b2-bd3c-12bc8bd5610c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015791539s Apr 29 19:35:32.271: INFO: Pod "pod-secrets-cb049a0c-3d7b-42b2-bd3c-12bc8bd5610c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019504028s STEP: Saw pod success Apr 29 19:35:32.271: INFO: Pod "pod-secrets-cb049a0c-3d7b-42b2-bd3c-12bc8bd5610c" satisfied condition "success or failure" Apr 29 19:35:32.273: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-cb049a0c-3d7b-42b2-bd3c-12bc8bd5610c container secret-volume-test: STEP: delete the pod Apr 29 19:35:32.294: INFO: Waiting for pod pod-secrets-cb049a0c-3d7b-42b2-bd3c-12bc8bd5610c to disappear Apr 29 19:35:32.311: INFO: Pod pod-secrets-cb049a0c-3d7b-42b2-bd3c-12bc8bd5610c no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:35:32.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1228" for this suite. 
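The Secrets volume test with `defaultMode` mounts a secret so that every projected key gets the requested file mode, then runs a test container that checks both the file content and its permissions. A minimal sketch, assuming illustrative names (the e2e framework generates its own UUID-suffixed names, as the log shows):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo   # illustrative; the test uses a generated name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # print the mode and content of the projected key
    command: ["sh", "-c", "stat -c '%a' /etc/secret-volume/data-1 && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test   # illustrative
      defaultMode: 0400         # applied to every key projected into the volume
```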
Apr 29 19:35:38.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:35:38.411: INFO: namespace secrets-1228 deletion completed in 6.095912168s • [SLOW TEST:10.324 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:35:38.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 29 19:35:38.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8968' Apr 29 19:35:41.084: INFO: stderr: "" Apr 29 19:35:41.084: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Apr 29 19:35:41.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-8968' Apr 29 19:35:49.226: INFO: stderr: "" Apr 29 19:35:49.226: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:35:49.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8968" for this suite. 
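The kubectl command in the log, `kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine`, creates a bare Pod rather than a Deployment because of `--restart=Never` with the `run-pod/v1` generator. A roughly equivalent manifest (the `run:` label is what `kubectl run` of this era would attach; treat it as an assumption):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  labels:
    run: e2e-test-nginx-pod   # label kubectl run adds; assumed here
spec:
  restartPolicy: Never
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/nginx:1.14-alpine
```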
Apr 29 19:35:55.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:35:55.347: INFO: namespace kubectl-8968 deletion completed in 6.100090941s • [SLOW TEST:16.936 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:35:55.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0429 19:36:05.422832 6 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 29 19:36:05.422: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:36:05.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9028" for this suite. 
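The garbage collector test above creates a ReplicationController, deletes it without orphaning, and then waits for the GC to remove the pods. The mechanism is `ownerReferences`: the rc controller stamps each pod it creates with a reference back to the rc, and deleting the owner (with the default, non-orphaning propagation) makes the GC delete the dependents. A sketch of such an rc; the name, labels, and image are illustrative, not from the log:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-demo-rc   # illustrative; the e2e framework generates its own name
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

Deleting this rc with an orphaning policy (`propagationPolicy: Orphan`) would instead strip the ownerReferences and leave the pods running, which is the behavior the "when not orphaning" phrasing of the test name distinguishes against.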
Apr 29 19:36:11.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:36:11.537: INFO: namespace gc-9028 deletion completed in 6.111397101s • [SLOW TEST:16.189 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:36:11.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-771301ee-ec54-4883-85d6-64cf04340dd4 STEP: Creating a pod to test consume secrets Apr 29 19:36:11.613: INFO: Waiting up to 5m0s for pod "pod-secrets-c52c091d-7337-4429-961f-ae3dd6db8d55" in namespace "secrets-5750" to be "success or failure" Apr 29 
19:36:11.629: INFO: Pod "pod-secrets-c52c091d-7337-4429-961f-ae3dd6db8d55": Phase="Pending", Reason="", readiness=false. Elapsed: 16.012061ms Apr 29 19:36:13.633: INFO: Pod "pod-secrets-c52c091d-7337-4429-961f-ae3dd6db8d55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019778709s Apr 29 19:36:15.637: INFO: Pod "pod-secrets-c52c091d-7337-4429-961f-ae3dd6db8d55": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023397435s Apr 29 19:36:17.641: INFO: Pod "pod-secrets-c52c091d-7337-4429-961f-ae3dd6db8d55": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027539615s Apr 29 19:36:19.644: INFO: Pod "pod-secrets-c52c091d-7337-4429-961f-ae3dd6db8d55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.031247918s STEP: Saw pod success Apr 29 19:36:19.644: INFO: Pod "pod-secrets-c52c091d-7337-4429-961f-ae3dd6db8d55" satisfied condition "success or failure" Apr 29 19:36:19.647: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-c52c091d-7337-4429-961f-ae3dd6db8d55 container secret-volume-test: STEP: delete the pod Apr 29 19:36:19.685: INFO: Waiting for pod pod-secrets-c52c091d-7337-4429-961f-ae3dd6db8d55 to disappear Apr 29 19:36:19.689: INFO: Pod pod-secrets-c52c091d-7337-4429-961f-ae3dd6db8d55 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:36:19.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5750" for this suite. 
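The "mappings and Item Mode" variant above differs from the plain `defaultMode` test in that individual secret keys are remapped to new paths and given per-item modes. A sketch of just the `volumes` stanza involved, with illustrative key and path names:

```yaml
volumes:
- name: secret-volume
  secret:
    secretName: secret-test-map   # illustrative
    items:
    - key: data-1
      path: new-path-data-1   # the key is projected under this filename
      mode: 0400              # per-item mode; overrides defaultMode for this key
```

Keys not listed under `items` are simply not projected, so a mapping also acts as a filter on which secret keys appear in the volume.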
Apr 29 19:36:25.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:36:25.797: INFO: namespace secrets-5750 deletion completed in 6.103994414s • [SLOW TEST:14.261 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:36:25.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 29 19:36:25.889: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-a545e151-1f0b-45d2-aa76-a66328b48070" in namespace "downward-api-2954" to be "success or failure" Apr 29 19:36:25.898: INFO: Pod "downwardapi-volume-a545e151-1f0b-45d2-aa76-a66328b48070": Phase="Pending", Reason="", readiness=false. Elapsed: 9.065052ms Apr 29 19:36:27.904: INFO: Pod "downwardapi-volume-a545e151-1f0b-45d2-aa76-a66328b48070": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015353119s Apr 29 19:36:29.908: INFO: Pod "downwardapi-volume-a545e151-1f0b-45d2-aa76-a66328b48070": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019170138s STEP: Saw pod success Apr 29 19:36:29.908: INFO: Pod "downwardapi-volume-a545e151-1f0b-45d2-aa76-a66328b48070" satisfied condition "success or failure" Apr 29 19:36:29.916: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-a545e151-1f0b-45d2-aa76-a66328b48070 container client-container: STEP: delete the pod Apr 29 19:36:29.930: INFO: Waiting for pod downwardapi-volume-a545e151-1f0b-45d2-aa76-a66328b48070 to disappear Apr 29 19:36:29.934: INFO: Pod downwardapi-volume-a545e151-1f0b-45d2-aa76-a66328b48070 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:36:29.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2954" for this suite. 
Apr 29 19:36:35.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:36:36.067: INFO: namespace downward-api-2954 deletion completed in 6.130937294s • [SLOW TEST:10.269 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:36:36.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Apr 29 19:36:36.127: INFO: Waiting up to 5m0s for pod "var-expansion-ea4bf527-71b8-4008-a429-afcf5a917344" in namespace "var-expansion-7388" to be "success or failure" Apr 29 19:36:36.138: INFO: Pod "var-expansion-ea4bf527-71b8-4008-a429-afcf5a917344": Phase="Pending", 
Reason="", readiness=false. Elapsed: 10.005747ms Apr 29 19:36:38.162: INFO: Pod "var-expansion-ea4bf527-71b8-4008-a429-afcf5a917344": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034448716s Apr 29 19:36:40.167: INFO: Pod "var-expansion-ea4bf527-71b8-4008-a429-afcf5a917344": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039770688s STEP: Saw pod success Apr 29 19:36:40.167: INFO: Pod "var-expansion-ea4bf527-71b8-4008-a429-afcf5a917344" satisfied condition "success or failure" Apr 29 19:36:40.169: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-ea4bf527-71b8-4008-a429-afcf5a917344 container dapi-container: STEP: delete the pod Apr 29 19:36:40.187: INFO: Waiting for pod var-expansion-ea4bf527-71b8-4008-a429-afcf5a917344 to disappear Apr 29 19:36:40.191: INFO: Pod var-expansion-ea4bf527-71b8-4008-a429-afcf5a917344 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:36:40.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7388" for this suite. 
Apr 29 19:36:46.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:36:46.324: INFO: namespace var-expansion-7388 deletion completed in 6.12929944s • [SLOW TEST:10.257 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:36:46.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 29 19:36:46.375: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. 
Apr 29 19:36:46.848: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 29 19:36:49.062: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755321806, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755321806, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755321806, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755321806, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 19:36:51.065: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755321806, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755321806, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755321806, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755321806, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 29 19:36:53.793: INFO: Waited 722.813197ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:36:54.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7548" for this suite. Apr 29 19:37:00.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:37:00.493: INFO: namespace aggregator-7548 deletion completed in 6.204007106s • [SLOW TEST:14.168 seconds] [sig-api-machinery] Aggregator /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 
19:37:00.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-d99980c4-d0ef-4c29-bed8-a0656ec1160c STEP: Creating a pod to test consume configMaps Apr 29 19:37:00.590: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8b8d6e13-9e13-4286-bb1f-6fa69e77a830" in namespace "projected-5275" to be "success or failure" Apr 29 19:37:00.609: INFO: Pod "pod-projected-configmaps-8b8d6e13-9e13-4286-bb1f-6fa69e77a830": Phase="Pending", Reason="", readiness=false. Elapsed: 19.608859ms Apr 29 19:37:02.613: INFO: Pod "pod-projected-configmaps-8b8d6e13-9e13-4286-bb1f-6fa69e77a830": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022990556s Apr 29 19:37:04.617: INFO: Pod "pod-projected-configmaps-8b8d6e13-9e13-4286-bb1f-6fa69e77a830": Phase="Running", Reason="", readiness=true. Elapsed: 4.027358023s Apr 29 19:37:06.621: INFO: Pod "pod-projected-configmaps-8b8d6e13-9e13-4286-bb1f-6fa69e77a830": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.03099088s STEP: Saw pod success Apr 29 19:37:06.621: INFO: Pod "pod-projected-configmaps-8b8d6e13-9e13-4286-bb1f-6fa69e77a830" satisfied condition "success or failure" Apr 29 19:37:06.623: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-8b8d6e13-9e13-4286-bb1f-6fa69e77a830 container projected-configmap-volume-test: STEP: delete the pod Apr 29 19:37:06.651: INFO: Waiting for pod pod-projected-configmaps-8b8d6e13-9e13-4286-bb1f-6fa69e77a830 to disappear Apr 29 19:37:06.665: INFO: Pod pod-projected-configmaps-8b8d6e13-9e13-4286-bb1f-6fa69e77a830 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:37:06.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5275" for this suite. Apr 29 19:37:12.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:37:12.795: INFO: namespace projected-5275 deletion completed in 6.125508245s • [SLOW TEST:12.302 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer 
[NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:37:12.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 29 19:37:12.826: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 29 19:37:20.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2735" for this suite. 
Apr 29 19:37:26.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 29 19:37:26.574: INFO: namespace init-container-2735 deletion completed in 6.115970123s • [SLOW TEST:13.780 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 29 19:37:26.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 29 19:37:26.704: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
alternatives.log
containers/

[the same two-entry log-directory listing repeats 20 times, once per proxied request; the remainder of the proxy test output is truncated here]
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 29 19:37:32.962: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Apr 29 19:37:37.967: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 29 19:37:37.967: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Apr 29 19:37:37.990: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-2431,SelfLink:/apis/apps/v1/namespaces/deployment-2431/deployments/test-cleanup-deployment,UID:df7b44b1-7dfe-4e6f-a328-e4f3c38f30f5,ResourceVersion:2887139,Generation:1,CreationTimestamp:2021-04-29 19:37:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Apr 29 19:37:37.996: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-2431,SelfLink:/apis/apps/v1/namespaces/deployment-2431/replicasets/test-cleanup-deployment-55bbcbc84c,UID:fbcf6e4b-8753-4957-bdbe-e69df2640baa,ResourceVersion:2887141,Generation:1,CreationTimestamp:2021-04-29 19:37:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment df7b44b1-7dfe-4e6f-a328-e4f3c38f30f5 0xc003471bc7 0xc003471bc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Apr 29 19:37:37.996: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Apr 29 19:37:37.996: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-2431,SelfLink:/apis/apps/v1/namespaces/deployment-2431/replicasets/test-cleanup-controller,UID:a213558f-be8c-4920-b5ac-36fb8f27d3a8,ResourceVersion:2887140,Generation:1,CreationTimestamp:2021-04-29 19:37:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment df7b44b1-7dfe-4e6f-a328-e4f3c38f30f5 0xc003471af7 0xc003471af8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Apr 29 19:37:38.046: INFO: Pod "test-cleanup-controller-nlvpq" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-nlvpq,GenerateName:test-cleanup-controller-,Namespace:deployment-2431,SelfLink:/api/v1/namespaces/deployment-2431/pods/test-cleanup-controller-nlvpq,UID:30e5dda7-d813-434b-be46-63fa34d4c7db,ResourceVersion:2887135,Generation:0,CreationTimestamp:2021-04-29 19:37:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller a213558f-be8c-4920-b5ac-36fb8f27d3a8 0xc002abac07 0xc002abac08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cdln5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cdln5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-cdln5 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002abac80} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc002abaca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:37:33 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:37:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:37:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:37:32 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.71,StartTime:2021-04-29 19:37:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-04-29 19:37:35 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://11a44cdabd816d63d5bbf822e538a01b072fa04512f01f0bd746ce556a847df3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 19:37:38.046: INFO: Pod "test-cleanup-deployment-55bbcbc84c-rx2vv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-rx2vv,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-2431,SelfLink:/api/v1/namespaces/deployment-2431/pods/test-cleanup-deployment-55bbcbc84c-rx2vv,UID:93ec5573-84fa-4cad-a74d-b0c2be0b4c35,ResourceVersion:2887147,Generation:0,CreationTimestamp:2021-04-29 19:37:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c fbcf6e4b-8753-4957-bdbe-e69df2640baa 0xc002abad87 0xc002abad88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cdln5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cdln5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-cdln5 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002abae00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002abae20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:37:37 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:37:38.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2431" for this suite.
Apr 29 19:37:44.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:37:44.221: INFO: namespace deployment-2431 deletion completed in 6.133383646s

• [SLOW TEST:11.332 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:37:44.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 29 19:37:44.286: INFO: Waiting up to 5m0s for pod "pod-408bf377-faff-4824-a547-510162a099bd" in namespace "emptydir-2900" to be "success or failure"
Apr 29 19:37:44.296: INFO: Pod "pod-408bf377-faff-4824-a547-510162a099bd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.108416ms
Apr 29 19:37:46.385: INFO: Pod "pod-408bf377-faff-4824-a547-510162a099bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098855375s
Apr 29 19:37:48.390: INFO: Pod "pod-408bf377-faff-4824-a547-510162a099bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.103248958s
STEP: Saw pod success
Apr 29 19:37:48.390: INFO: Pod "pod-408bf377-faff-4824-a547-510162a099bd" satisfied condition "success or failure"
Apr 29 19:37:48.393: INFO: Trying to get logs from node iruya-worker pod pod-408bf377-faff-4824-a547-510162a099bd container test-container: 
STEP: delete the pod
Apr 29 19:37:48.428: INFO: Waiting for pod pod-408bf377-faff-4824-a547-510162a099bd to disappear
Apr 29 19:37:48.434: INFO: Pod pod-408bf377-faff-4824-a547-510162a099bd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:37:48.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2900" for this suite.
Apr 29 19:37:54.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:37:54.543: INFO: namespace emptydir-2900 deletion completed in 6.105109468s

• [SLOW TEST:10.323 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:37:54.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-7c5971a9-409e-44cb-ba65-6f0150d570ae
STEP: Creating a pod to test consume configMaps
Apr 29 19:37:54.629: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1a622b3f-c474-4948-bd87-de29422f7c06" in namespace "projected-5754" to be "success or failure"
Apr 29 19:37:54.648: INFO: Pod "pod-projected-configmaps-1a622b3f-c474-4948-bd87-de29422f7c06": Phase="Pending", Reason="", readiness=false. Elapsed: 19.672334ms
Apr 29 19:37:56.847: INFO: Pod "pod-projected-configmaps-1a622b3f-c474-4948-bd87-de29422f7c06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218047697s
Apr 29 19:37:58.851: INFO: Pod "pod-projected-configmaps-1a622b3f-c474-4948-bd87-de29422f7c06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.222242455s
STEP: Saw pod success
Apr 29 19:37:58.851: INFO: Pod "pod-projected-configmaps-1a622b3f-c474-4948-bd87-de29422f7c06" satisfied condition "success or failure"
Apr 29 19:37:58.854: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-1a622b3f-c474-4948-bd87-de29422f7c06 container projected-configmap-volume-test: 
STEP: delete the pod
Apr 29 19:37:58.912: INFO: Waiting for pod pod-projected-configmaps-1a622b3f-c474-4948-bd87-de29422f7c06 to disappear
Apr 29 19:37:58.996: INFO: Pod pod-projected-configmaps-1a622b3f-c474-4948-bd87-de29422f7c06 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:37:58.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5754" for this suite.
Apr 29 19:38:05.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:38:05.109: INFO: namespace projected-5754 deletion completed in 6.108624343s

• [SLOW TEST:10.565 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:38:05.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7131
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 19:38:05.166: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 29 19:38:33.347: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7131 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 19:38:33.347: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 19:38:34.538: INFO: Found all expected endpoints: [netserver-0]
Apr 29 19:38:34.541: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.240 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7131 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 19:38:34.541: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 19:38:35.690: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:38:35.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7131" for this suite.
Apr 29 19:38:57.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:38:57.809: INFO: namespace pod-network-test-7131 deletion completed in 22.11305502s

• [SLOW TEST:52.699 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:38:57.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 29 19:38:57.871: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Apr 29 19:38:57.895: INFO: Number of nodes with available pods: 0
Apr 29 19:38:57.895: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Apr 29 19:38:57.961: INFO: Number of nodes with available pods: 0
Apr 29 19:38:57.961: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 19:38:58.966: INFO: Number of nodes with available pods: 0
Apr 29 19:38:58.966: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 19:38:59.965: INFO: Number of nodes with available pods: 0
Apr 29 19:38:59.965: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 19:39:00.965: INFO: Number of nodes with available pods: 1
Apr 29 19:39:00.965: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Apr 29 19:39:00.990: INFO: Number of nodes with available pods: 1
Apr 29 19:39:00.990: INFO: Number of running nodes: 0, number of available pods: 1
Apr 29 19:39:01.994: INFO: Number of nodes with available pods: 0
Apr 29 19:39:01.994: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Apr 29 19:39:02.003: INFO: Number of nodes with available pods: 0
Apr 29 19:39:02.003: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 19:39:03.123: INFO: Number of nodes with available pods: 0
Apr 29 19:39:03.123: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 19:39:04.007: INFO: Number of nodes with available pods: 0
Apr 29 19:39:04.007: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 19:39:05.009: INFO: Number of nodes with available pods: 0
Apr 29 19:39:05.009: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 19:39:06.008: INFO: Number of nodes with available pods: 0
Apr 29 19:39:06.008: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 19:39:07.040: INFO: Number of nodes with available pods: 0
Apr 29 19:39:07.040: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 19:39:08.008: INFO: Number of nodes with available pods: 0
Apr 29 19:39:08.008: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 19:39:09.007: INFO: Number of nodes with available pods: 1
Apr 29 19:39:09.007: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2594, will wait for the garbage collector to delete the pods
Apr 29 19:39:09.082: INFO: Deleting DaemonSet.extensions daemon-set took: 16.685818ms
Apr 29 19:39:09.382: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.225198ms
Apr 29 19:39:13.799: INFO: Number of nodes with available pods: 0
Apr 29 19:39:13.800: INFO: Number of running nodes: 0, number of available pods: 0
Apr 29 19:39:13.802: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2594/daemonsets","resourceVersion":"2887533"},"items":null}

Apr 29 19:39:13.805: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2594/pods","resourceVersion":"2887533"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:39:13.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2594" for this suite.
Apr 29 19:39:19.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:39:19.947: INFO: namespace daemonsets-2594 deletion completed in 6.105413918s

• [SLOW TEST:22.139 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:39:19.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9668.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9668.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9668.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9668.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 29 19:39:26.088: INFO: DNS probes using dns-test-be81aee4-5ae3-4e15-a764-643ffdddd93c succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9668.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9668.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9668.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9668.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 29 19:39:32.199: INFO: File wheezy_udp@dns-test-service-3.dns-9668.svc.cluster.local from pod  dns-9668/dns-test-16d2ac64-5dbd-40ac-9d84-b440a9b1b010 contains 'foo.example.com.
' instead of 'bar.example.com.'
Apr 29 19:39:32.202: INFO: File jessie_udp@dns-test-service-3.dns-9668.svc.cluster.local from pod  dns-9668/dns-test-16d2ac64-5dbd-40ac-9d84-b440a9b1b010 contains 'foo.example.com.
' instead of 'bar.example.com.'
Apr 29 19:39:32.202: INFO: Lookups using dns-9668/dns-test-16d2ac64-5dbd-40ac-9d84-b440a9b1b010 failed for: [wheezy_udp@dns-test-service-3.dns-9668.svc.cluster.local jessie_udp@dns-test-service-3.dns-9668.svc.cluster.local]

Apr 29 19:39:37.210: INFO: File wheezy_udp@dns-test-service-3.dns-9668.svc.cluster.local from pod  dns-9668/dns-test-16d2ac64-5dbd-40ac-9d84-b440a9b1b010 contains 'foo.example.com.
' instead of 'bar.example.com.'
Apr 29 19:39:37.214: INFO: File jessie_udp@dns-test-service-3.dns-9668.svc.cluster.local from pod  dns-9668/dns-test-16d2ac64-5dbd-40ac-9d84-b440a9b1b010 contains 'foo.example.com.
' instead of 'bar.example.com.'
Apr 29 19:39:37.214: INFO: Lookups using dns-9668/dns-test-16d2ac64-5dbd-40ac-9d84-b440a9b1b010 failed for: [wheezy_udp@dns-test-service-3.dns-9668.svc.cluster.local jessie_udp@dns-test-service-3.dns-9668.svc.cluster.local]

Apr 29 19:39:42.207: INFO: File wheezy_udp@dns-test-service-3.dns-9668.svc.cluster.local from pod  dns-9668/dns-test-16d2ac64-5dbd-40ac-9d84-b440a9b1b010 contains 'foo.example.com.
' instead of 'bar.example.com.'
Apr 29 19:39:42.210: INFO: File jessie_udp@dns-test-service-3.dns-9668.svc.cluster.local from pod  dns-9668/dns-test-16d2ac64-5dbd-40ac-9d84-b440a9b1b010 contains 'foo.example.com.
' instead of 'bar.example.com.'
Apr 29 19:39:42.210: INFO: Lookups using dns-9668/dns-test-16d2ac64-5dbd-40ac-9d84-b440a9b1b010 failed for: [wheezy_udp@dns-test-service-3.dns-9668.svc.cluster.local jessie_udp@dns-test-service-3.dns-9668.svc.cluster.local]

Apr 29 19:39:47.215: INFO: File wheezy_udp@dns-test-service-3.dns-9668.svc.cluster.local from pod  dns-9668/dns-test-16d2ac64-5dbd-40ac-9d84-b440a9b1b010 contains 'foo.example.com.
' instead of 'bar.example.com.'
Apr 29 19:39:47.218: INFO: File jessie_udp@dns-test-service-3.dns-9668.svc.cluster.local from pod  dns-9668/dns-test-16d2ac64-5dbd-40ac-9d84-b440a9b1b010 contains 'foo.example.com.
' instead of 'bar.example.com.'
Apr 29 19:39:47.219: INFO: Lookups using dns-9668/dns-test-16d2ac64-5dbd-40ac-9d84-b440a9b1b010 failed for: [wheezy_udp@dns-test-service-3.dns-9668.svc.cluster.local jessie_udp@dns-test-service-3.dns-9668.svc.cluster.local]

Apr 29 19:39:52.207: INFO: File wheezy_udp@dns-test-service-3.dns-9668.svc.cluster.local from pod  dns-9668/dns-test-16d2ac64-5dbd-40ac-9d84-b440a9b1b010 contains 'foo.example.com.
' instead of 'bar.example.com.'
Apr 29 19:39:52.211: INFO: File jessie_udp@dns-test-service-3.dns-9668.svc.cluster.local from pod  dns-9668/dns-test-16d2ac64-5dbd-40ac-9d84-b440a9b1b010 contains 'foo.example.com.
' instead of 'bar.example.com.'
Apr 29 19:39:52.211: INFO: Lookups using dns-9668/dns-test-16d2ac64-5dbd-40ac-9d84-b440a9b1b010 failed for: [wheezy_udp@dns-test-service-3.dns-9668.svc.cluster.local jessie_udp@dns-test-service-3.dns-9668.svc.cluster.local]

Apr 29 19:39:57.211: INFO: DNS probes using dns-test-16d2ac64-5dbd-40ac-9d84-b440a9b1b010 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9668.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9668.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9668.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9668.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 29 19:40:03.423: INFO: DNS probes using dns-test-ef9625c9-5ad9-41ef-ba27-2f604452a5c3 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:40:03.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9668" for this suite.
Apr 29 19:40:09.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:40:09.615: INFO: namespace dns-9668 deletion completed in 6.124636374s

• [SLOW TEST:49.668 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:40:09.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 29 19:40:09.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-7687'
Apr 29 19:40:09.768: INFO: stderr: ""
Apr 29 19:40:09.768: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Apr 29 19:40:14.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-7687 -o json'
Apr 29 19:40:14.913: INFO: stderr: ""
Apr 29 19:40:14.913: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2021-04-29T19:40:09Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-7687\",\n        \"resourceVersion\": \"2887794\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-7687/pods/e2e-test-nginx-pod\",\n        \"uid\": \"f1d593fa-66dc-4100-b10e-cd9434292a9b\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-8hw25\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-worker\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": 
\"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-8hw25\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-8hw25\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2021-04-29T19:40:09Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2021-04-29T19:40:12Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2021-04-29T19:40:12Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2021-04-29T19:40:09Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://5acf6f3aba06221ce3b0f6b3784ef937c1581763bb5901933b130423ac35e024\",\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        
\"startedAt\": \"2021-04-29T19:40:12Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.3\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.1.244\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2021-04-29T19:40:09Z\"\n    }\n}\n"
STEP: replace the image in the pod
Apr 29 19:40:14.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7687'
Apr 29 19:40:15.336: INFO: stderr: ""
Apr 29 19:40:15.336: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Apr 29 19:40:15.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-7687'
Apr 29 19:40:29.245: INFO: stderr: ""
Apr 29 19:40:29.245: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:40:29.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7687" for this suite.
Apr 29 19:40:35.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:40:35.395: INFO: namespace kubectl-7687 deletion completed in 6.128863512s

• [SLOW TEST:25.779 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:40:35.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 29 19:40:35.459: INFO: Waiting up to 5m0s for pod "pod-73aa4f7f-048c-4e59-af0e-ed9252d6431b" in namespace "emptydir-9183" to be "success or failure"
Apr 29 19:40:35.508: INFO: Pod "pod-73aa4f7f-048c-4e59-af0e-ed9252d6431b": Phase="Pending", Reason="", readiness=false. Elapsed: 49.384659ms
Apr 29 19:40:37.808: INFO: Pod "pod-73aa4f7f-048c-4e59-af0e-ed9252d6431b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.348960244s
Apr 29 19:40:39.812: INFO: Pod "pod-73aa4f7f-048c-4e59-af0e-ed9252d6431b": Phase="Running", Reason="", readiness=true. Elapsed: 4.353181096s
Apr 29 19:40:41.815: INFO: Pod "pod-73aa4f7f-048c-4e59-af0e-ed9252d6431b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.356828184s
STEP: Saw pod success
Apr 29 19:40:41.816: INFO: Pod "pod-73aa4f7f-048c-4e59-af0e-ed9252d6431b" satisfied condition "success or failure"
Apr 29 19:40:41.818: INFO: Trying to get logs from node iruya-worker2 pod pod-73aa4f7f-048c-4e59-af0e-ed9252d6431b container test-container: 
STEP: delete the pod
Apr 29 19:40:41.850: INFO: Waiting for pod pod-73aa4f7f-048c-4e59-af0e-ed9252d6431b to disappear
Apr 29 19:40:41.879: INFO: Pod pod-73aa4f7f-048c-4e59-af0e-ed9252d6431b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:40:41.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9183" for this suite.
Apr 29 19:40:47.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:40:47.996: INFO: namespace emptydir-9183 deletion completed in 6.111503874s

• [SLOW TEST:12.600 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:40:47.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-6734a218-9146-4fb6-b6a8-6dc015f8d8cb
STEP: Creating a pod to test consume configMaps
Apr 29 19:40:48.089: INFO: Waiting up to 5m0s for pod "pod-configmaps-6987d22e-e22d-4d2d-91dd-35d7111ef144" in namespace "configmap-6505" to be "success or failure"
Apr 29 19:40:48.115: INFO: Pod "pod-configmaps-6987d22e-e22d-4d2d-91dd-35d7111ef144": Phase="Pending", Reason="", readiness=false. Elapsed: 25.705067ms
Apr 29 19:40:50.118: INFO: Pod "pod-configmaps-6987d22e-e22d-4d2d-91dd-35d7111ef144": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029079599s
Apr 29 19:40:52.122: INFO: Pod "pod-configmaps-6987d22e-e22d-4d2d-91dd-35d7111ef144": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033462457s
STEP: Saw pod success
Apr 29 19:40:52.123: INFO: Pod "pod-configmaps-6987d22e-e22d-4d2d-91dd-35d7111ef144" satisfied condition "success or failure"
Apr 29 19:40:52.126: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-6987d22e-e22d-4d2d-91dd-35d7111ef144 container configmap-volume-test: 
STEP: delete the pod
Apr 29 19:40:52.331: INFO: Waiting for pod pod-configmaps-6987d22e-e22d-4d2d-91dd-35d7111ef144 to disappear
Apr 29 19:40:52.340: INFO: Pod pod-configmaps-6987d22e-e22d-4d2d-91dd-35d7111ef144 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:40:52.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6505" for this suite.
Apr 29 19:40:58.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:40:58.469: INFO: namespace configmap-6505 deletion completed in 6.125543364s

• [SLOW TEST:10.473 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:40:58.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 29 19:40:58.551: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6a7c54d7-86db-47dd-958c-84ccb05b3054" in namespace "downward-api-2195" to be "success or failure"
Apr 29 19:40:58.598: INFO: Pod "downwardapi-volume-6a7c54d7-86db-47dd-958c-84ccb05b3054": Phase="Pending", Reason="", readiness=false. Elapsed: 47.601124ms
Apr 29 19:41:00.602: INFO: Pod "downwardapi-volume-6a7c54d7-86db-47dd-958c-84ccb05b3054": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051450295s
Apr 29 19:41:02.644: INFO: Pod "downwardapi-volume-6a7c54d7-86db-47dd-958c-84ccb05b3054": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092623292s
STEP: Saw pod success
Apr 29 19:41:02.644: INFO: Pod "downwardapi-volume-6a7c54d7-86db-47dd-958c-84ccb05b3054" satisfied condition "success or failure"
Apr 29 19:41:02.646: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-6a7c54d7-86db-47dd-958c-84ccb05b3054 container client-container: 
STEP: delete the pod
Apr 29 19:41:02.682: INFO: Waiting for pod downwardapi-volume-6a7c54d7-86db-47dd-958c-84ccb05b3054 to disappear
Apr 29 19:41:02.704: INFO: Pod downwardapi-volume-6a7c54d7-86db-47dd-958c-84ccb05b3054 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:41:02.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2195" for this suite.
Apr 29 19:41:08.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:41:08.834: INFO: namespace downward-api-2195 deletion completed in 6.125440106s

• [SLOW TEST:10.365 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:41:08.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-0c102476-1489-4601-875c-7db48c8b7a66
STEP: Creating a pod to test consume secrets
Apr 29 19:41:08.929: INFO: Waiting up to 5m0s for pod "pod-secrets-20a919ea-6ad1-4e84-ae35-06ed6b33691b" in namespace "secrets-1849" to be "success or failure"
Apr 29 19:41:08.937: INFO: Pod "pod-secrets-20a919ea-6ad1-4e84-ae35-06ed6b33691b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.512021ms
Apr 29 19:41:10.942: INFO: Pod "pod-secrets-20a919ea-6ad1-4e84-ae35-06ed6b33691b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012905429s
Apr 29 19:41:12.946: INFO: Pod "pod-secrets-20a919ea-6ad1-4e84-ae35-06ed6b33691b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016956208s
STEP: Saw pod success
Apr 29 19:41:12.946: INFO: Pod "pod-secrets-20a919ea-6ad1-4e84-ae35-06ed6b33691b" satisfied condition "success or failure"
Apr 29 19:41:12.949: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-20a919ea-6ad1-4e84-ae35-06ed6b33691b container secret-volume-test: 
STEP: delete the pod
Apr 29 19:41:12.965: INFO: Waiting for pod pod-secrets-20a919ea-6ad1-4e84-ae35-06ed6b33691b to disappear
Apr 29 19:41:12.976: INFO: Pod pod-secrets-20a919ea-6ad1-4e84-ae35-06ed6b33691b no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:41:12.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1849" for this suite.
Apr 29 19:41:19.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:41:19.091: INFO: namespace secrets-1849 deletion completed in 6.111399715s

• [SLOW TEST:10.256 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:41:19.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-68fe9add-5875-4c5e-8823-c610f67f8088 in namespace container-probe-1588
Apr 29 19:41:23.233: INFO: Started pod liveness-68fe9add-5875-4c5e-8823-c610f67f8088 in namespace container-probe-1588
STEP: checking the pod's current state and verifying that restartCount is present
Apr 29 19:41:23.236: INFO: Initial restart count of pod liveness-68fe9add-5875-4c5e-8823-c610f67f8088 is 0
Apr 29 19:41:41.279: INFO: Restart count of pod container-probe-1588/liveness-68fe9add-5875-4c5e-8823-c610f67f8088 is now 1 (18.043469373s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:41:41.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1588" for this suite.
Apr 29 19:41:47.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:41:47.426: INFO: namespace container-probe-1588 deletion completed in 6.104275063s

• [SLOW TEST:28.334 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
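The probe test above restarts its pod through an HTTP liveness check against /healthz. A minimal manifest exercising the same mechanism might look like the sketch below; the pod name, image, port, and timing values are illustrative assumptions, not the values generated by the e2e framework:

```yaml
# Hypothetical pod with an HTTP liveness probe; kubelet GETs /healthz and
# restarts the container on repeated failures, incrementing restartCount.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo        # hypothetical name
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness    # assumed demo image, not the test's image
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080                # assumed port
      initialDelaySeconds: 3
      periodSeconds: 3
```

A restart driven by this probe is what the log observes as "Restart count ... is now 1".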
SSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:41:47.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-5dfcb1ec-391d-4fce-8e24-f726e52f2cce in namespace container-probe-3769
Apr 29 19:41:51.598: INFO: Started pod liveness-5dfcb1ec-391d-4fce-8e24-f726e52f2cce in namespace container-probe-3769
STEP: checking the pod's current state and verifying that restartCount is present
Apr 29 19:41:51.600: INFO: Initial restart count of pod liveness-5dfcb1ec-391d-4fce-8e24-f726e52f2cce is 0
Apr 29 19:42:09.651: INFO: Restart count of pod container-probe-3769/liveness-5dfcb1ec-391d-4fce-8e24-f726e52f2cce is now 1 (18.050614137s elapsed)
Apr 29 19:42:29.694: INFO: Restart count of pod container-probe-3769/liveness-5dfcb1ec-391d-4fce-8e24-f726e52f2cce is now 2 (38.093708962s elapsed)
Apr 29 19:42:49.735: INFO: Restart count of pod container-probe-3769/liveness-5dfcb1ec-391d-4fce-8e24-f726e52f2cce is now 3 (58.134695742s elapsed)
Apr 29 19:43:09.784: INFO: Restart count of pod container-probe-3769/liveness-5dfcb1ec-391d-4fce-8e24-f726e52f2cce is now 4 (1m18.183294197s elapsed)
Apr 29 19:44:21.945: INFO: Restart count of pod container-probe-3769/liveness-5dfcb1ec-391d-4fce-8e24-f726e52f2cce is now 5 (2m30.34490522s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:44:21.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3769" for this suite.
Apr 29 19:44:27.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:44:28.101: INFO: namespace container-probe-3769 deletion completed in 6.123391045s

• [SLOW TEST:160.675 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:44:28.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-65df
STEP: Creating a pod to test atomic-volume-subpath
Apr 29 19:44:28.178: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-65df" in namespace "subpath-4288" to be "success or failure"
Apr 29 19:44:28.182: INFO: Pod "pod-subpath-test-downwardapi-65df": Phase="Pending", Reason="", readiness=false. Elapsed: 3.557004ms
Apr 29 19:44:30.279: INFO: Pod "pod-subpath-test-downwardapi-65df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100316193s
Apr 29 19:44:32.283: INFO: Pod "pod-subpath-test-downwardapi-65df": Phase="Running", Reason="", readiness=true. Elapsed: 4.104619297s
Apr 29 19:44:34.287: INFO: Pod "pod-subpath-test-downwardapi-65df": Phase="Running", Reason="", readiness=true. Elapsed: 6.109037675s
Apr 29 19:44:36.291: INFO: Pod "pod-subpath-test-downwardapi-65df": Phase="Running", Reason="", readiness=true. Elapsed: 8.112973013s
Apr 29 19:44:38.295: INFO: Pod "pod-subpath-test-downwardapi-65df": Phase="Running", Reason="", readiness=true. Elapsed: 10.1170718s
Apr 29 19:44:40.299: INFO: Pod "pod-subpath-test-downwardapi-65df": Phase="Running", Reason="", readiness=true. Elapsed: 12.12099752s
Apr 29 19:44:42.304: INFO: Pod "pod-subpath-test-downwardapi-65df": Phase="Running", Reason="", readiness=true. Elapsed: 14.125440245s
Apr 29 19:44:44.308: INFO: Pod "pod-subpath-test-downwardapi-65df": Phase="Running", Reason="", readiness=true. Elapsed: 16.129870169s
Apr 29 19:44:46.313: INFO: Pod "pod-subpath-test-downwardapi-65df": Phase="Running", Reason="", readiness=true. Elapsed: 18.134453837s
Apr 29 19:44:48.317: INFO: Pod "pod-subpath-test-downwardapi-65df": Phase="Running", Reason="", readiness=true. Elapsed: 20.138703885s
Apr 29 19:44:50.322: INFO: Pod "pod-subpath-test-downwardapi-65df": Phase="Running", Reason="", readiness=true. Elapsed: 22.143399397s
Apr 29 19:44:52.334: INFO: Pod "pod-subpath-test-downwardapi-65df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.155994559s
STEP: Saw pod success
Apr 29 19:44:52.334: INFO: Pod "pod-subpath-test-downwardapi-65df" satisfied condition "success or failure"
Apr 29 19:44:52.337: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-65df container test-container-subpath-downwardapi-65df: 
STEP: delete the pod
Apr 29 19:44:52.358: INFO: Waiting for pod pod-subpath-test-downwardapi-65df to disappear
Apr 29 19:44:52.368: INFO: Pod pod-subpath-test-downwardapi-65df no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-65df
Apr 29 19:44:52.368: INFO: Deleting pod "pod-subpath-test-downwardapi-65df" in namespace "subpath-4288"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:44:52.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4288" for this suite.
Apr 29 19:44:58.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:44:58.507: INFO: namespace subpath-4288 deletion completed in 6.132954013s

• [SLOW TEST:30.405 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
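The subpath test above ("atomic-volume-subpath") mounts a downward API volume into a container via subPath. As a rough sketch of that shape, with hypothetical names and paths rather than the ones the test generates:

```yaml
# Hypothetical pod mounting one file from a downward API volume via subPath.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-downward-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # assumed image
    command: ["sh", "-c", "cat /test-volume/podname"]
    volumeMounts:
    - name: downward
      mountPath: /test-volume/podname
      subPath: podname              # mounts a single item from the volume
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```

The long Running-phase sequence in the log comes from the test container polling the subPath-mounted content before exiting with success.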
SSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:44:58.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 29 19:45:06.642: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 29 19:45:06.649: INFO: Pod pod-with-poststart-http-hook still exists
Apr 29 19:45:08.649: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 29 19:45:08.653: INFO: Pod pod-with-poststart-http-hook still exists
Apr 29 19:45:10.649: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 29 19:45:10.652: INFO: Pod pod-with-poststart-http-hook still exists
Apr 29 19:45:12.649: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 29 19:45:12.653: INFO: Pod pod-with-poststart-http-hook still exists
Apr 29 19:45:14.649: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 29 19:45:14.653: INFO: Pod pod-with-poststart-http-hook still exists
Apr 29 19:45:16.649: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 29 19:45:16.653: INFO: Pod pod-with-poststart-http-hook still exists
Apr 29 19:45:18.649: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 29 19:45:18.653: INFO: Pod pod-with-poststart-http-hook still exists
Apr 29 19:45:20.649: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 29 19:45:20.653: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:45:20.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-225" for this suite.
Apr 29 19:45:42.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:45:42.783: INFO: namespace container-lifecycle-hook-225 deletion completed in 22.125369916s

• [SLOW TEST:44.276 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:45:42.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-bz5h
STEP: Creating a pod to test atomic-volume-subpath
Apr 29 19:45:42.862: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bz5h" in namespace "subpath-788" to be "success or failure"
Apr 29 19:45:42.885: INFO: Pod "pod-subpath-test-configmap-bz5h": Phase="Pending", Reason="", readiness=false. Elapsed: 23.122679ms
Apr 29 19:45:44.889: INFO: Pod "pod-subpath-test-configmap-bz5h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027708284s
Apr 29 19:45:46.894: INFO: Pod "pod-subpath-test-configmap-bz5h": Phase="Running", Reason="", readiness=true. Elapsed: 4.032041662s
Apr 29 19:45:48.898: INFO: Pod "pod-subpath-test-configmap-bz5h": Phase="Running", Reason="", readiness=true. Elapsed: 6.036397263s
Apr 29 19:45:50.902: INFO: Pod "pod-subpath-test-configmap-bz5h": Phase="Running", Reason="", readiness=true. Elapsed: 8.040258599s
Apr 29 19:45:52.906: INFO: Pod "pod-subpath-test-configmap-bz5h": Phase="Running", Reason="", readiness=true. Elapsed: 10.044397186s
Apr 29 19:45:54.910: INFO: Pod "pod-subpath-test-configmap-bz5h": Phase="Running", Reason="", readiness=true. Elapsed: 12.048440283s
Apr 29 19:45:56.914: INFO: Pod "pod-subpath-test-configmap-bz5h": Phase="Running", Reason="", readiness=true. Elapsed: 14.052526443s
Apr 29 19:45:58.918: INFO: Pod "pod-subpath-test-configmap-bz5h": Phase="Running", Reason="", readiness=true. Elapsed: 16.056482613s
Apr 29 19:46:00.922: INFO: Pod "pod-subpath-test-configmap-bz5h": Phase="Running", Reason="", readiness=true. Elapsed: 18.060613459s
Apr 29 19:46:02.926: INFO: Pod "pod-subpath-test-configmap-bz5h": Phase="Running", Reason="", readiness=true. Elapsed: 20.064066726s
Apr 29 19:46:04.930: INFO: Pod "pod-subpath-test-configmap-bz5h": Phase="Running", Reason="", readiness=true. Elapsed: 22.068244694s
Apr 29 19:46:06.934: INFO: Pod "pod-subpath-test-configmap-bz5h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.072371118s
STEP: Saw pod success
Apr 29 19:46:06.934: INFO: Pod "pod-subpath-test-configmap-bz5h" satisfied condition "success or failure"
Apr 29 19:46:06.937: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-bz5h container test-container-subpath-configmap-bz5h: 
STEP: delete the pod
Apr 29 19:46:06.977: INFO: Waiting for pod pod-subpath-test-configmap-bz5h to disappear
Apr 29 19:46:06.991: INFO: Pod pod-subpath-test-configmap-bz5h no longer exists
STEP: Deleting pod pod-subpath-test-configmap-bz5h
Apr 29 19:46:06.991: INFO: Deleting pod "pod-subpath-test-configmap-bz5h" in namespace "subpath-788"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:46:06.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-788" for this suite.
Apr 29 19:46:13.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:46:13.092: INFO: namespace subpath-788 deletion completed in 6.095729093s

• [SLOW TEST:30.308 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:46:13.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 29 19:46:13.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4156'
Apr 29 19:46:15.708: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 29 19:46:15.709: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Apr 29 19:46:17.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4156'
Apr 29 19:46:18.107: INFO: stderr: ""
Apr 29 19:46:18.107: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:46:18.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4156" for this suite.
Apr 29 19:48:20.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:48:20.220: INFO: namespace kubectl-4156 deletion completed in 2m2.109804735s

• [SLOW TEST:127.128 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
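The `kubectl run` invocation above emits a generator deprecation warning on stderr. Per that message, the non-deprecated equivalents would be along these lines (resource names are illustrative):

```shell
# Deprecated form used by the test (generator creates a Deployment):
kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine

# Alternatives suggested by the warning:
kubectl run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
```

Note the deletion output reports `deployment.extensions` while creation reported `deployment.apps`; on v1.15 both API groups still serve Deployments.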
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:48:20.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 29 19:48:20.289: INFO: Waiting up to 5m0s for pod "downward-api-3bdb3458-1359-4d7f-a153-b8cdc9ce35e8" in namespace "downward-api-2689" to be "success or failure"
Apr 29 19:48:20.305: INFO: Pod "downward-api-3bdb3458-1359-4d7f-a153-b8cdc9ce35e8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.709129ms
Apr 29 19:48:22.309: INFO: Pod "downward-api-3bdb3458-1359-4d7f-a153-b8cdc9ce35e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019867876s
Apr 29 19:48:24.314: INFO: Pod "downward-api-3bdb3458-1359-4d7f-a153-b8cdc9ce35e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02404142s
STEP: Saw pod success
Apr 29 19:48:24.314: INFO: Pod "downward-api-3bdb3458-1359-4d7f-a153-b8cdc9ce35e8" satisfied condition "success or failure"
Apr 29 19:48:24.316: INFO: Trying to get logs from node iruya-worker2 pod downward-api-3bdb3458-1359-4d7f-a153-b8cdc9ce35e8 container dapi-container: 
STEP: delete the pod
Apr 29 19:48:24.518: INFO: Waiting for pod downward-api-3bdb3458-1359-4d7f-a153-b8cdc9ce35e8 to disappear
Apr 29 19:48:24.545: INFO: Pod downward-api-3bdb3458-1359-4d7f-a153-b8cdc9ce35e8 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:48:24.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2689" for this suite.
Apr 29 19:48:30.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:48:30.658: INFO: namespace downward-api-2689 deletion completed in 6.108474164s

• [SLOW TEST:10.437 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:48:30.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:48:36.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-85" for this suite.
Apr 29 19:48:42.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:48:43.029: INFO: namespace namespaces-85 deletion completed in 6.151792865s
STEP: Destroying namespace "nsdeletetest-7794" for this suite.
Apr 29 19:48:43.031: INFO: Namespace nsdeletetest-7794 was already deleted
STEP: Destroying namespace "nsdeletetest-1488" for this suite.
Apr 29 19:48:49.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:48:49.136: INFO: namespace nsdeletetest-1488 deletion completed in 6.105156868s

• [SLOW TEST:18.478 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:48:49.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 29 19:48:49.295: INFO: Waiting up to 5m0s for pod "pod-0bcf18dd-fd24-458f-8ab9-175a707857d3" in namespace "emptydir-5421" to be "success or failure"
Apr 29 19:48:49.314: INFO: Pod "pod-0bcf18dd-fd24-458f-8ab9-175a707857d3": Phase="Pending", Reason="", readiness=false. Elapsed: 18.750827ms
Apr 29 19:48:51.318: INFO: Pod "pod-0bcf18dd-fd24-458f-8ab9-175a707857d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022389954s
Apr 29 19:48:53.322: INFO: Pod "pod-0bcf18dd-fd24-458f-8ab9-175a707857d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026892302s
STEP: Saw pod success
Apr 29 19:48:53.322: INFO: Pod "pod-0bcf18dd-fd24-458f-8ab9-175a707857d3" satisfied condition "success or failure"
Apr 29 19:48:53.325: INFO: Trying to get logs from node iruya-worker2 pod pod-0bcf18dd-fd24-458f-8ab9-175a707857d3 container test-container: 
STEP: delete the pod
Apr 29 19:48:53.531: INFO: Waiting for pod pod-0bcf18dd-fd24-458f-8ab9-175a707857d3 to disappear
Apr 29 19:48:53.594: INFO: Pod pod-0bcf18dd-fd24-458f-8ab9-175a707857d3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:48:53.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5421" for this suite.
Apr 29 19:48:59.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:48:59.757: INFO: namespace emptydir-5421 deletion completed in 6.16036539s

• [SLOW TEST:10.621 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:48:59.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-9826
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-9826
STEP: Deleting pre-stop pod
Apr 29 19:49:12.895: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:49:12.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-9826" for this suite.
Apr 29 19:49:50.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:49:51.005: INFO: namespace prestop-9826 deletion completed in 38.099489085s

• [SLOW TEST:51.248 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:49:51.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-283
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 19:49:51.075: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 29 19:50:17.194: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.85:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-283 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 19:50:17.194: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 19:50:17.326: INFO: Found all expected endpoints: [netserver-0]
Apr 29 19:50:17.329: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.252:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-283 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 19:50:17.329: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 19:50:17.471: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:50:17.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-283" for this suite.
Apr 29 19:50:41.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:50:41.618: INFO: namespace pod-network-test-283 deletion completed in 24.141986881s

• [SLOW TEST:50.612 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:50:41.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 29 19:50:49.753: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 29 19:50:49.764: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 29 19:50:51.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 29 19:50:51.768: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 29 19:50:53.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 29 19:50:53.768: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 29 19:50:55.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 29 19:50:55.768: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 29 19:50:57.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 29 19:50:57.768: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 29 19:50:59.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 29 19:50:59.769: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 29 19:51:01.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 29 19:51:01.768: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 29 19:51:03.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 29 19:51:03.768: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 29 19:51:05.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 29 19:51:05.768: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 29 19:51:07.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 29 19:51:07.768: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 29 19:51:09.764: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 29 19:51:09.768: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:51:09.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8968" for this suite.
Apr 29 19:51:31.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:51:31.871: INFO: namespace container-lifecycle-hook-8968 deletion completed in 22.098186424s

• [SLOW TEST:50.252 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:51:31.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7016.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7016.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7016.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7016.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7016.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7016.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7016.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7016.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7016.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7016.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7016.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 163.174.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.174.163_udp@PTR;check="$$(dig +tcp +noall +answer +search 163.174.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.174.163_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7016.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7016.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7016.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7016.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7016.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7016.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7016.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7016.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7016.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7016.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7016.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 163.174.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.174.163_udp@PTR;check="$$(dig +tcp +noall +answer +search 163.174.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.174.163_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 29 19:51:38.031: INFO: Unable to read wheezy_udp@dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:38.034: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:38.037: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:38.040: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:38.057: INFO: Unable to read jessie_udp@dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:38.059: INFO: Unable to read jessie_tcp@dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:38.063: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:38.066: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:38.082: INFO: Lookups using dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5 failed for: [wheezy_udp@dns-test-service.dns-7016.svc.cluster.local wheezy_tcp@dns-test-service.dns-7016.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local jessie_udp@dns-test-service.dns-7016.svc.cluster.local jessie_tcp@dns-test-service.dns-7016.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local]

Apr 29 19:51:43.087: INFO: Unable to read wheezy_udp@dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:43.091: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:43.095: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:43.098: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:43.120: INFO: Unable to read jessie_udp@dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:43.122: INFO: Unable to read jessie_tcp@dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:43.125: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:43.128: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:43.146: INFO: Lookups using dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5 failed for: [wheezy_udp@dns-test-service.dns-7016.svc.cluster.local wheezy_tcp@dns-test-service.dns-7016.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local jessie_udp@dns-test-service.dns-7016.svc.cluster.local jessie_tcp@dns-test-service.dns-7016.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local]

Apr 29 19:51:48.086: INFO: Unable to read wheezy_udp@dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:48.089: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:48.092: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:48.095: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:48.113: INFO: Unable to read jessie_udp@dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:48.116: INFO: Unable to read jessie_tcp@dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:48.118: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:48.120: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:48.140: INFO: Lookups using dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5 failed for: [wheezy_udp@dns-test-service.dns-7016.svc.cluster.local wheezy_tcp@dns-test-service.dns-7016.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local jessie_udp@dns-test-service.dns-7016.svc.cluster.local jessie_tcp@dns-test-service.dns-7016.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local]

Apr 29 19:51:53.088: INFO: Unable to read wheezy_udp@dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:53.092: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:53.096: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:53.099: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:53.117: INFO: Unable to read jessie_udp@dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:53.120: INFO: Unable to read jessie_tcp@dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:53.123: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:53.126: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:53.141: INFO: Lookups using dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5 failed for: [wheezy_udp@dns-test-service.dns-7016.svc.cluster.local wheezy_tcp@dns-test-service.dns-7016.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local jessie_udp@dns-test-service.dns-7016.svc.cluster.local jessie_tcp@dns-test-service.dns-7016.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local]

Apr 29 19:51:58.087: INFO: Unable to read wheezy_udp@dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:58.091: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:58.094: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:58.097: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:58.119: INFO: Unable to read jessie_udp@dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:58.123: INFO: Unable to read jessie_tcp@dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:58.126: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:58.129: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:51:58.145: INFO: Lookups using dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5 failed for: [wheezy_udp@dns-test-service.dns-7016.svc.cluster.local wheezy_tcp@dns-test-service.dns-7016.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local jessie_udp@dns-test-service.dns-7016.svc.cluster.local jessie_tcp@dns-test-service.dns-7016.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local]

Apr 29 19:52:03.086: INFO: Unable to read wheezy_udp@dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:52:03.089: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:52:03.092: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:52:03.095: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:52:03.112: INFO: Unable to read jessie_udp@dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:52:03.114: INFO: Unable to read jessie_tcp@dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:52:03.117: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:52:03.119: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local from pod dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5: the server could not find the requested resource (get pods dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5)
Apr 29 19:52:03.142: INFO: Lookups using dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5 failed for: [wheezy_udp@dns-test-service.dns-7016.svc.cluster.local wheezy_tcp@dns-test-service.dns-7016.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local jessie_udp@dns-test-service.dns-7016.svc.cluster.local jessie_tcp@dns-test-service.dns-7016.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7016.svc.cluster.local]

Apr 29 19:52:08.151: INFO: DNS probes using dns-7016/dns-test-d77df155-4cb4-4e02-9776-46e454ecd8c5 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:52:08.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7016" for this suite.
Apr 29 19:52:14.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:52:14.416: INFO: namespace dns-7016 deletion completed in 6.082110181s

• [SLOW TEST:42.545 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
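The eight entries in each "Lookups ... failed for" line above come from a fixed matrix: two client images (wheezy, jessie), two protocols (udp, tcp), and two DNS targets (the service FQDN and its `_http._tcp` SRV name). A minimal sketch of how those probe names are composed (a hypothetical helper, not the e2e framework's own code):

```python
def dns_probe_names(service: str, namespace: str) -> list[str]:
    """Build the probe names seen in the log above, one per
    (image, target, protocol) combination, in the order the log prints them."""
    fqdn = f"{service}.{namespace}.svc.cluster.local"
    targets = [fqdn, f"_http._tcp.{fqdn}"]  # A/AAAA target, then SRV target
    return [
        f"{image}_{proto}@{target}"
        for image in ("wheezy", "jessie")
        for target in targets
        for proto in ("udp", "tcp")
    ]
```

For `dns_probe_names("dns-test-service", "dns-7016")` this reproduces the exact eight names the test retries until CoreDNS serves the records and the probe finally reports "DNS probes ... succeeded".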
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:52:14.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 29 19:52:14.522: INFO: Create a RollingUpdate DaemonSet
Apr 29 19:52:14.525: INFO: Check that daemon pods launch on every node of the cluster
Apr 29 19:52:14.528: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:52:14.543: INFO: Number of nodes with available pods: 0
Apr 29 19:52:14.543: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 19:52:15.548: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:52:15.573: INFO: Number of nodes with available pods: 0
Apr 29 19:52:15.573: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 19:52:16.548: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:52:16.552: INFO: Number of nodes with available pods: 0
Apr 29 19:52:16.552: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 19:52:17.552: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:52:17.556: INFO: Number of nodes with available pods: 0
Apr 29 19:52:17.556: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 19:52:18.547: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:52:18.549: INFO: Number of nodes with available pods: 0
Apr 29 19:52:18.549: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 19:52:19.731: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:52:19.869: INFO: Number of nodes with available pods: 0
Apr 29 19:52:19.869: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 19:52:20.549: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:52:20.552: INFO: Number of nodes with available pods: 1
Apr 29 19:52:20.552: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 29 19:52:21.548: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:52:21.573: INFO: Number of nodes with available pods: 2
Apr 29 19:52:21.573: INFO: Number of running nodes: 2, number of available pods: 2
Apr 29 19:52:21.573: INFO: Update the DaemonSet to trigger a rollout
Apr 29 19:52:21.580: INFO: Updating DaemonSet daemon-set
Apr 29 19:52:29.598: INFO: Roll back the DaemonSet before rollout is complete
Apr 29 19:52:29.603: INFO: Updating DaemonSet daemon-set
Apr 29 19:52:29.603: INFO: Make sure DaemonSet rollback is complete
Apr 29 19:52:29.610: INFO: Wrong image for pod: daemon-set-kmdsp. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Apr 29 19:52:29.610: INFO: Pod daemon-set-kmdsp is not available
Apr 29 19:52:29.632: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:52:30.638: INFO: Wrong image for pod: daemon-set-kmdsp. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Apr 29 19:52:30.638: INFO: Pod daemon-set-kmdsp is not available
Apr 29 19:52:30.642: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:52:31.637: INFO: Wrong image for pod: daemon-set-kmdsp. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Apr 29 19:52:31.637: INFO: Pod daemon-set-kmdsp is not available
Apr 29 19:52:31.641: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:52:32.636: INFO: Wrong image for pod: daemon-set-kmdsp. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Apr 29 19:52:32.637: INFO: Pod daemon-set-kmdsp is not available
Apr 29 19:52:32.640: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:52:33.636: INFO: Wrong image for pod: daemon-set-kmdsp. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Apr 29 19:52:33.636: INFO: Pod daemon-set-kmdsp is not available
Apr 29 19:52:33.639: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 19:52:34.636: INFO: Pod daemon-set-9d2nk is not available
Apr 29 19:52:34.639: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1710, will wait for the garbage collector to delete the pods
Apr 29 19:52:34.703: INFO: Deleting DaemonSet.extensions daemon-set took: 5.616441ms
Apr 29 19:52:35.003: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.224612ms
Apr 29 19:52:49.307: INFO: Number of nodes with available pods: 0
Apr 29 19:52:49.307: INFO: Number of running nodes: 0, number of available pods: 0
Apr 29 19:52:49.322: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1710/daemonsets","resourceVersion":"2889961"},"items":null}

Apr 29 19:52:49.323: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1710/pods","resourceVersion":"2889961"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:52:49.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1710" for this suite.
Apr 29 19:52:55.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:52:55.440: INFO: namespace daemonsets-1710 deletion completed in 6.103561381s

• [SLOW TEST:41.024 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
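The "Make sure DaemonSet rollback is complete" phase above repeatedly compares each daemon pod's image against the rolled-back image (`docker.io/library/nginx:1.14-alpine`) until no pod still reports the bad one (`foo:non-existent`). A minimal sketch of that check, assuming a plain name-to-image mapping rather than the framework's pod objects:

```python
def rollback_incomplete_pods(pod_images: dict[str, str], expected: str) -> list[str]:
    """Return the daemon pods still running the wrong image; rollback is
    complete only when this list is empty (mirrors the "Wrong image for
    pod" lines in the log above)."""
    return sorted(name for name, image in pod_images.items() if image != expected)
```

On the state logged at 19:52:29 this would flag `daemon-set-kmdsp`; after the kubelet replaces it (the `daemon-set-9d2nk` line), the list empties and the test proceeds to teardown.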
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:52:55.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Apr 29 19:52:55.504: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 29 19:52:55.516: INFO: Waiting for terminating namespaces to be deleted...
Apr 29 19:52:55.518: INFO: 
Logging pods the kubelet thinks is on node iruya-worker before test
Apr 29 19:52:55.522: INFO: kindnet-7fbjm from kube-system started at 2021-04-13 08:09:02 +0000 UTC (1 container statuses recorded)
Apr 29 19:52:55.522: INFO: 	Container kindnet-cni ready: true, restart count 0
Apr 29 19:52:55.522: INFO: chaos-daemon-kbww4 from default started at 2021-04-13 15:47:46 +0000 UTC (1 container statuses recorded)
Apr 29 19:52:55.522: INFO: 	Container chaos-daemon ready: true, restart count 0
Apr 29 19:52:55.522: INFO: kube-proxy-qp6db from kube-system started at 2021-04-13 08:09:02 +0000 UTC (1 container statuses recorded)
Apr 29 19:52:55.522: INFO: 	Container kube-proxy ready: true, restart count 0
Apr 29 19:52:55.522: INFO: 
Logging pods the kubelet thinks is on node iruya-worker2 before test
Apr 29 19:52:55.527: INFO: kindnet-nxsfn from kube-system started at 2021-04-13 08:09:02 +0000 UTC (1 container statuses recorded)
Apr 29 19:52:55.527: INFO: 	Container kindnet-cni ready: true, restart count 0
Apr 29 19:52:55.527: INFO: kube-proxy-pz4cr from kube-system started at 2021-04-13 08:09:02 +0000 UTC (1 container statuses recorded)
Apr 29 19:52:55.527: INFO: 	Container kube-proxy ready: true, restart count 0
Apr 29 19:52:55.527: INFO: chaos-daemon-5nrq6 from default started at 2021-04-13 15:47:46 +0000 UTC (1 container statuses recorded)
Apr 29 19:52:55.527: INFO: 	Container chaos-daemon ready: true, restart count 0
Apr 29 19:52:55.527: INFO: chaos-controller-manager-6c68f56f79-plhrb from default started at 2021-04-13 15:47:46 +0000 UTC (1 container statuses recorded)
Apr 29 19:52:55.527: INFO: 	Container chaos-mesh ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-7f08eaa2-d225-41ef-b750-9a256d051cca 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-7f08eaa2-d225-41ef-b750-9a256d051cca off the node iruya-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-7f08eaa2-d225-41ef-b750-9a256d051cca
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:53:03.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9015" for this suite.
Apr 29 19:53:13.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:53:13.770: INFO: namespace sched-pred-9015 deletion completed in 10.104044093s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:18.330 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
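The test above applies a random label (`kubernetes.io/e2e-7f08eaa2-...` with value `42`) to a node and relaunches the pod with a matching nodeSelector. The predicate being validated is simple subset matching: every key/value pair in the pod's nodeSelector must appear among the node's labels. A sketch of that rule (illustrative, not the scheduler's actual implementation):

```python
def node_matches_selector(node_labels: dict[str, str], selector: dict[str, str]) -> bool:
    """A node fits a pod's nodeSelector iff the selector is a subset of the
    node's labels (every required key maps to the required value)."""
    return all(node_labels.get(key) == value for key, value in selector.items())
```

An empty selector matches every node, which is why the initial unlabeled probe pod in the "Trying to launch a pod without a label" step schedules anywhere.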
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:53:13.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-cc11eb73-1e8d-4d00-903e-db2d00aa51d7 in namespace container-probe-3762
Apr 29 19:53:17.835: INFO: Started pod test-webserver-cc11eb73-1e8d-4d00-903e-db2d00aa51d7 in namespace container-probe-3762
STEP: checking the pod's current state and verifying that restartCount is present
Apr 29 19:53:17.879: INFO: Initial restart count of pod test-webserver-cc11eb73-1e8d-4d00-903e-db2d00aa51d7 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:57:18.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3762" for this suite.
Apr 29 19:57:24.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:57:24.689: INFO: namespace container-probe-3762 deletion completed in 6.165491947s

• [SLOW TEST:250.918 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
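The probe test above watches a webserver pod for roughly four minutes and asserts its restartCount never leaves 0, i.e. the HTTP `/healthz` liveness probe keeps succeeding. The kubelet's restart decision can be sketched as a simulator over a sequence of probe outcomes, assuming the default `failureThreshold` of 3 consecutive failures before a kill (a behavioral sketch, not kubelet code):

```python
def restarts_after(probe_results: list[bool], failure_threshold: int = 3) -> int:
    """Count container restarts a liveness loop would trigger: a restart
    fires after `failure_threshold` consecutive failures, and the
    consecutive-failure counter resets on any success or after a restart."""
    restarts = consecutive = 0
    for ok in probe_results:
        if ok:
            consecutive = 0
        else:
            consecutive += 1
            if consecutive == failure_threshold:
                restarts += 1
                consecutive = 0
    return restarts
```

With every probe succeeding, as in this run, the count stays at 0, matching the "Initial restart count ... is 0" observation holding through teardown.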
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:57:24.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Apr 29 19:57:24.737: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 29 19:57:24.754: INFO: Waiting for terminating namespaces to be deleted...
Apr 29 19:57:24.756: INFO: 
Logging pods the kubelet thinks is on node iruya-worker before test
Apr 29 19:57:24.763: INFO: kindnet-7fbjm from kube-system started at 2021-04-13 08:09:02 +0000 UTC (1 container statuses recorded)
Apr 29 19:57:24.763: INFO: 	Container kindnet-cni ready: true, restart count 0
Apr 29 19:57:24.763: INFO: chaos-daemon-kbww4 from default started at 2021-04-13 15:47:46 +0000 UTC (1 container statuses recorded)
Apr 29 19:57:24.763: INFO: 	Container chaos-daemon ready: true, restart count 0
Apr 29 19:57:24.763: INFO: kube-proxy-qp6db from kube-system started at 2021-04-13 08:09:02 +0000 UTC (1 container statuses recorded)
Apr 29 19:57:24.763: INFO: 	Container kube-proxy ready: true, restart count 0
Apr 29 19:57:24.763: INFO: 
Logging pods the kubelet thinks is on node iruya-worker2 before test
Apr 29 19:57:24.771: INFO: chaos-daemon-5nrq6 from default started at 2021-04-13 15:47:46 +0000 UTC (1 container statuses recorded)
Apr 29 19:57:24.771: INFO: 	Container chaos-daemon ready: true, restart count 0
Apr 29 19:57:24.771: INFO: kube-proxy-pz4cr from kube-system started at 2021-04-13 08:09:02 +0000 UTC (1 container statuses recorded)
Apr 29 19:57:24.771: INFO: 	Container kube-proxy ready: true, restart count 0
Apr 29 19:57:24.771: INFO: chaos-controller-manager-6c68f56f79-plhrb from default started at 2021-04-13 15:47:46 +0000 UTC (1 container statuses recorded)
Apr 29 19:57:24.771: INFO: 	Container chaos-mesh ready: true, restart count 0
Apr 29 19:57:24.771: INFO: kindnet-nxsfn from kube-system started at 2021-04-13 08:09:02 +0000 UTC (1 container statuses recorded)
Apr 29 19:57:24.771: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.167a6c701e0a81cd], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:57:25.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1584" for this suite.
Apr 29 19:57:31.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:57:31.933: INFO: namespace sched-pred-1584 deletion completed in 6.131688527s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.243 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
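The FailedScheduling event above reports "0/3 nodes are available: 3 node(s) didn't match node selector." since the pod's nonempty nodeSelector matches none of the three nodes. The message shape can be reconstructed as follows (an illustrative formatter, not the scheduler's actual error aggregation, which groups multiple failure reasons):

```python
def failed_scheduling_message(total_nodes: int, mismatched: int) -> str:
    """Compose the scheduler event message seen in the log: nodes that fit
    over total nodes, then the count that failed the node-selector predicate."""
    fitting = total_nodes - mismatched
    return (f"{fitting}/{total_nodes} nodes are available: "
            f"{mismatched} node(s) didn't match node selector.")
```

The test passes precisely because this warning event is observed: the predicate correctly rejects every node.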
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:57:31.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:57:35.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5599" for this suite.
Apr 29 19:57:42.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:57:42.252: INFO: namespace kubelet-test-5599 deletion completed in 6.249879783s

• [SLOW TEST:10.318 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:57:42.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:58:16.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3927" for this suite.
Apr 29 19:58:22.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:58:22.268: INFO: namespace container-runtime-3927 deletion completed in 6.129110979s

• [SLOW TEST:40.016 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
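The three container names in the test above encode the restart policy being exercised: `terminate-cmd-rpa` (RestartPolicy Always), `terminate-cmd-rpof` (OnFailure), and `terminate-cmd-rpn` (Never); for each, the suite checks RestartCount, Phase, Ready condition, and State. A sketch of one variant, assuming a simple exiting command (the exact command in the suite differs):

```yaml
# Illustrative sketch of the "rpa" case: a container that exits non-zero under
# restartPolicy Always, so RestartCount keeps growing and the pod stays Running.
# Swapping restartPolicy to OnFailure or Never yields the rpof / rpn variants.
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpa
spec:
  restartPolicy: Always
  containers:
  - name: terminate-cmd-rpa
    image: busybox
    command: ["sh", "-c", "exit 1"]
```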
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:58:22.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 29 19:58:22.327: INFO: Waiting up to 5m0s for pod "pod-e72844fd-b451-4d8b-bc27-e96213322909" in namespace "emptydir-9051" to be "success or failure"
Apr 29 19:58:22.331: INFO: Pod "pod-e72844fd-b451-4d8b-bc27-e96213322909": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019529ms
Apr 29 19:58:24.335: INFO: Pod "pod-e72844fd-b451-4d8b-bc27-e96213322909": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008247902s
Apr 29 19:58:26.339: INFO: Pod "pod-e72844fd-b451-4d8b-bc27-e96213322909": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012279176s
STEP: Saw pod success
Apr 29 19:58:26.339: INFO: Pod "pod-e72844fd-b451-4d8b-bc27-e96213322909" satisfied condition "success or failure"
Apr 29 19:58:26.342: INFO: Trying to get logs from node iruya-worker2 pod pod-e72844fd-b451-4d8b-bc27-e96213322909 container test-container: 
STEP: delete the pod
Apr 29 19:58:26.375: INFO: Waiting for pod pod-e72844fd-b451-4d8b-bc27-e96213322909 to disappear
Apr 29 19:58:26.387: INFO: Pod pod-e72844fd-b451-4d8b-bc27-e96213322909 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:58:26.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9051" for this suite.
Apr 29 19:58:32.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:58:32.493: INFO: namespace emptydir-9051 deletion completed in 6.101517526s

• [SLOW TEST:10.224 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
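The emptyDir case above writes a root-owned file with mode 0644 onto a tmpfs-backed volume and verifies content and permissions from inside the pod. A rough equivalent using busybox (the real suite uses a dedicated mounttest image, so this is an approximation):

```yaml
# Illustrative sketch: emptyDir with medium: Memory mounts a tmpfs; the test
# writes a 0644 file as root and checks its mode and contents.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
```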
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:58:32.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 29 19:58:37.633: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:58:37.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-41" for this suite.
Apr 29 19:58:43.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:58:43.773: INFO: namespace container-runtime-41 deletion completed in 6.10027117s

• [SLOW TEST:11.280 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
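The termination-message test above relies on `terminationMessagePolicy: FallbackToLogsOnError`: when a container fails and its terminationMessagePath file is empty, the kubelet falls back to the tail of the container's log, which is why the log output `DONE` matched the expected termination message. A minimal sketch (pod name is illustrative):

```yaml
# Illustrative sketch: the container fails without writing to
# /dev/termination-log, so FallbackToLogsOnError promotes its last
# log output ("DONE") to status.containerStatuses[0].state.terminated.message.
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-from-logs   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox
    command: ["sh", "-c", "echo -n DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
```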
SSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:58:43.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 29 19:58:43.851: INFO: Creating deployment "test-recreate-deployment"
Apr 29 19:58:43.855: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Apr 29 19:58:43.886: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Apr 29 19:58:45.893: INFO: Waiting for deployment "test-recreate-deployment" to complete
Apr 29 19:58:45.895: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755323123, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755323123, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755323123, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755323123, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 19:58:47.899: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Apr 29 19:58:47.906: INFO: Updating deployment test-recreate-deployment
Apr 29 19:58:47.906: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Apr 29 19:58:48.161: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-9530,SelfLink:/apis/apps/v1/namespaces/deployment-9530/deployments/test-recreate-deployment,UID:cb916e2a-a9ae-49d4-b7c1-ffb45e182345,ResourceVersion:2890965,Generation:2,CreationTimestamp:2021-04-29 19:58:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2021-04-29 19:58:48 +0000 UTC 2021-04-29 19:58:48 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2021-04-29 19:58:48 +0000 UTC 2021-04-29 19:58:43 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Apr 29 19:58:48.164: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-9530,SelfLink:/apis/apps/v1/namespaces/deployment-9530/replicasets/test-recreate-deployment-5c8c9cc69d,UID:48a444e8-625d-4a84-8d5c-f92f7f1c1548,ResourceVersion:2890964,Generation:1,CreationTimestamp:2021-04-29 19:58:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment cb916e2a-a9ae-49d4-b7c1-ffb45e182345 0xc0039ee8c7 0xc0039ee8c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Apr 29 19:58:48.164: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Apr 29 19:58:48.165: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-9530,SelfLink:/apis/apps/v1/namespaces/deployment-9530/replicasets/test-recreate-deployment-6df85df6b9,UID:6447a58a-81ab-4d15-81b7-0c1119654a56,ResourceVersion:2890952,Generation:2,CreationTimestamp:2021-04-29 19:58:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment cb916e2a-a9ae-49d4-b7c1-ffb45e182345 0xc0039ee997 0xc0039ee998}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Apr 29 19:58:48.168: INFO: Pod "test-recreate-deployment-5c8c9cc69d-qsnzd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-qsnzd,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-9530,SelfLink:/api/v1/namespaces/deployment-9530/pods/test-recreate-deployment-5c8c9cc69d-qsnzd,UID:1898e831-9a7d-4ee7-8bdb-4f4c8c03871a,ResourceVersion:2890963,Generation:0,CreationTimestamp:2021-04-29 19:58:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 48a444e8-625d-4a84-8d5c-f92f7f1c1548 0xc0039ef267 0xc0039ef268}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwsx6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwsx6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-pwsx6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0039ef2e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0039ef300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:48 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2021-04-29 19:58:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:58:48.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9530" for this suite.
Apr 29 19:58:54.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:58:54.269: INFO: namespace deployment-9530 deletion completed in 6.097668358s

• [SLOW TEST:10.496 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
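The object dumps above show what `strategy: type: Recreate` guarantees: the old ReplicaSet (redis template) is scaled to 0 before the new ReplicaSet (nginx:1.14-alpine template) creates any pods, so old and new pods never run together. Reconstructed from the fields visible in the log:

```yaml
# Sketch of the deployment under test, assembled from the logged spec fields.
# Recreate (unlike the default RollingUpdate) tears down all old pods first.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```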
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:58:54.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7267
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-7267
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-7267
Apr 29 19:58:54.587: INFO: Found 0 stateful pods, waiting for 1
Apr 29 19:59:04.591: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Apr 29 19:59:04.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7267 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 29 19:59:07.495: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Apr 29 19:59:07.495: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 29 19:59:07.495: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Apr 29 19:59:07.499: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Apr 29 19:59:17.503: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Apr 29 19:59:17.503: INFO: Waiting for statefulset status.replicas updated to 0
Apr 29 19:59:17.518: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Apr 29 19:59:17.518: INFO: ss-0  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:54 +0000 UTC  }]
Apr 29 19:59:17.518: INFO: 
Apr 29 19:59:17.518: INFO: StatefulSet ss has not reached scale 3, at 1
Apr 29 19:59:18.523: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992792247s
Apr 29 19:59:19.953: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987846548s
Apr 29 19:59:21.060: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.557057132s
Apr 29 19:59:22.831: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.450596045s
Apr 29 19:59:23.835: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.679463883s
Apr 29 19:59:24.840: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.675861388s
Apr 29 19:59:25.845: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.670788352s
Apr 29 19:59:26.849: INFO: Verifying statefulset ss doesn't scale past 3 for another 665.657428ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-7267
Apr 29 19:59:27.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7267 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 29 19:59:28.118: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Apr 29 19:59:28.118: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 29 19:59:28.118: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Apr 29 19:59:28.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7267 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 29 19:59:28.400: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Apr 29 19:59:28.400: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 29 19:59:28.400: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Apr 29 19:59:28.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7267 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 29 19:59:28.653: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Apr 29 19:59:28.653: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 29 19:59:28.653: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Apr 29 19:59:28.657: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 19:59:28.657: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 19:59:28.657: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Apr 29 19:59:28.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7267 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 29 19:59:28.878: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Apr 29 19:59:28.878: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 29 19:59:28.878: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Apr 29 19:59:28.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7267 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 29 19:59:29.129: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Apr 29 19:59:29.129: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 29 19:59:29.129: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Apr 29 19:59:29.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7267 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 29 19:59:29.473: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Apr 29 19:59:29.473: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 29 19:59:29.473: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

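The `mv -v … || true` exec'd into each pod above moves nginx's `index.html` out of the served directory, so the HTTP readiness probe starts failing while the `kubectl exec` itself still exits 0 even when the file was already moved (the log shows `mv: can't rename` on stderr yet the step continues). A minimal local sketch of that trick, using temp directories in place of `/usr/share/nginx/html` and `/tmp`:

```shell
# Sketch of the readiness-breaking trick: move the probe target out of the
# web root; '|| true' masks the failure when the file is already gone.
set -eu
webroot=$(mktemp -d)   # stands in for /usr/share/nginx/html
stash=$(mktemp -d)     # stands in for /tmp
echo ok > "$webroot/index.html"

# First move succeeds: the probe target disappears from the web root.
mv -v "$webroot/index.html" "$stash/" || true

# Second move fails (file already moved), but '|| true' keeps the exit
# status 0, which is why the framework logs the mv error yet proceeds.
mv -v "$webroot/index.html" "$stash/" || true
echo "exit status: $?"
```

Moving the file back (as at the top of this section) restores the probe target and readiness without restarting the container.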
Apr 29 19:59:29.473: INFO: Waiting for statefulset status.replicas updated to 0
Apr 29 19:59:29.477: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Apr 29 19:59:39.484: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Apr 29 19:59:39.484: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Apr 29 19:59:39.484: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Apr 29 19:59:39.509: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Apr 29 19:59:39.509: INFO: ss-0  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:54 +0000 UTC  }]
Apr 29 19:59:39.509: INFO: ss-1  iruya-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  }]
Apr 29 19:59:39.509: INFO: ss-2  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  }]
Apr 29 19:59:39.509: INFO: 
Apr 29 19:59:39.509: INFO: StatefulSet ss has not reached scale 0, at 3
Apr 29 19:59:40.514: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Apr 29 19:59:40.514: INFO: ss-0  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:54 +0000 UTC  }]
Apr 29 19:59:40.514: INFO: ss-1  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  }]
Apr 29 19:59:40.514: INFO: ss-2  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  }]
Apr 29 19:59:40.514: INFO: 
Apr 29 19:59:40.514: INFO: StatefulSet ss has not reached scale 0, at 3
Apr 29 19:59:41.653: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Apr 29 19:59:41.653: INFO: ss-0  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:54 +0000 UTC  }]
Apr 29 19:59:41.653: INFO: ss-1  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  }]
Apr 29 19:59:41.653: INFO: ss-2  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  }]
Apr 29 19:59:41.653: INFO: 
Apr 29 19:59:41.653: INFO: StatefulSet ss has not reached scale 0, at 3
Apr 29 19:59:42.659: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Apr 29 19:59:42.659: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:54 +0000 UTC  }]
Apr 29 19:59:42.659: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  }]
Apr 29 19:59:42.659: INFO: ss-2  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  }]
Apr 29 19:59:42.659: INFO: 
Apr 29 19:59:42.659: INFO: StatefulSet ss has not reached scale 0, at 3
Apr 29 19:59:43.666: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Apr 29 19:59:43.667: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:54 +0000 UTC  }]
Apr 29 19:59:43.667: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  }]
Apr 29 19:59:43.667: INFO: ss-2  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  }]
Apr 29 19:59:43.667: INFO: 
Apr 29 19:59:43.667: INFO: StatefulSet ss has not reached scale 0, at 3
Apr 29 19:59:44.672: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Apr 29 19:59:44.672: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:54 +0000 UTC  }]
Apr 29 19:59:44.672: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  }]
Apr 29 19:59:44.672: INFO: ss-2  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  }]
Apr 29 19:59:44.672: INFO: 
Apr 29 19:59:44.672: INFO: StatefulSet ss has not reached scale 0, at 3
Apr 29 19:59:45.677: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Apr 29 19:59:45.677: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:54 +0000 UTC  }]
Apr 29 19:59:45.677: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  }]
Apr 29 19:59:45.678: INFO: ss-2  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  }]
Apr 29 19:59:45.678: INFO: 
Apr 29 19:59:45.678: INFO: StatefulSet ss has not reached scale 0, at 3
Apr 29 19:59:46.682: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Apr 29 19:59:46.682: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:54 +0000 UTC  }]
Apr 29 19:59:46.682: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  }]
Apr 29 19:59:46.682: INFO: ss-2  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  }]
Apr 29 19:59:46.682: INFO: 
Apr 29 19:59:46.682: INFO: StatefulSet ss has not reached scale 0, at 3
Apr 29 19:59:47.694: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Apr 29 19:59:47.694: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:54 +0000 UTC  }]
Apr 29 19:59:47.694: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  }]
Apr 29 19:59:47.694: INFO: ss-2  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  }]
Apr 29 19:59:47.694: INFO: 
Apr 29 19:59:47.694: INFO: StatefulSet ss has not reached scale 0, at 3
Apr 29 19:59:48.699: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Apr 29 19:59:48.699: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:58:54 +0000 UTC  }]
Apr 29 19:59:48.700: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  }]
Apr 29 19:59:48.700: INFO: ss-2  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 19:59:17 +0000 UTC  }]
Apr 29 19:59:48.700: INFO: 
Apr 29 19:59:48.700: INFO: StatefulSet ss has not reached scale 0, at 3
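The repeated `StatefulSet ss has not reached scale 0, at 3` lines above are one poll iteration each: the framework re-lists the pods on a short interval until the scale-down condition holds or a timeout expires. A hedged local sketch of that poll-until-condition loop (the real framework waits on pod state for up to 10 minutes; here the "condition" is just a file appearing, with a 10-second deadline):

```shell
# Minimal poll-with-timeout loop in the style of the framework's wait:
# re-check a condition on a fixed interval until it holds or time runs out.
set -eu
target=$(mktemp -u)            # condition: this path exists
( sleep 1; : > "$target" ) &   # background task satisfies it after ~1s

deadline=$(( $(date +%s) + 10 ))   # 10s stand-in for the framework's timeout
until [ -e "$target" ]; do
  if [ "$(date +%s)" -ge "$deadline" ]; then
    echo "timed out waiting for condition" >&2
    exit 1
  fi
  sleep 1
done
echo "condition met"
```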
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-7267
Apr 29 19:59:49.704: INFO: Scaling statefulset ss to 0
Apr 29 19:59:49.715: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr 29 19:59:49.717: INFO: Deleting all statefulset in ns statefulset-7267
Apr 29 19:59:49.720: INFO: Scaling statefulset ss to 0
Apr 29 19:59:49.729: INFO: Waiting for statefulset status.replicas updated to 0
Apr 29 19:59:49.732: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 19:59:49.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7267" for this suite.
Apr 29 19:59:55.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 19:59:55.857: INFO: namespace statefulset-7267 deletion completed in 6.106184958s

• [SLOW TEST:61.588 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 19:59:55.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 29 19:59:55.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3912'
Apr 29 19:59:56.036: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 29 19:59:56.036: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Apr 29 19:59:56.049: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Apr 29 19:59:56.058: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Apr 29 19:59:56.128: INFO: scanned /root for discovery docs: 
Apr 29 19:59:56.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3912'
Apr 29 20:00:11.959: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Apr 29 20:00:11.959: INFO: stdout: "Created e2e-test-nginx-rc-a514d8a558be72d4727d6acc1f10648f\nScaling up e2e-test-nginx-rc-a514d8a558be72d4727d6acc1f10648f from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-a514d8a558be72d4727d6acc1f10648f up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-a514d8a558be72d4727d6acc1f10648f to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Apr 29 20:00:11.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3912'
Apr 29 20:00:12.065: INFO: stderr: ""
Apr 29 20:00:12.065: INFO: stdout: "e2e-test-nginx-rc-a514d8a558be72d4727d6acc1f10648f-zvxwl "
Apr 29 20:00:12.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-a514d8a558be72d4727d6acc1f10648f-zvxwl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3912'
Apr 29 20:00:12.159: INFO: stderr: ""
Apr 29 20:00:12.159: INFO: stdout: "true"
Apr 29 20:00:12.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-a514d8a558be72d4727d6acc1f10648f-zvxwl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3912'
Apr 29 20:00:12.252: INFO: stderr: ""
Apr 29 20:00:12.252: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Apr 29 20:00:12.252: INFO: e2e-test-nginx-rc-a514d8a558be72d4727d6acc1f10648f-zvxwl is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Apr 29 20:00:12.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3912'
Apr 29 20:00:12.364: INFO: stderr: ""
Apr 29 20:00:12.364: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:00:12.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3912" for this suite.
Apr 29 20:00:34.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:00:34.508: INFO: namespace kubectl-3912 deletion completed in 22.110185916s

• [SLOW TEST:38.650 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:00:34.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Apr 29 20:00:34.618: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8908,SelfLink:/api/v1/namespaces/watch-8908/configmaps/e2e-watch-test-label-changed,UID:cc0f6302-dc32-4e21-a3aa-3a83ed1ed061,ResourceVersion:2891449,Generation:0,CreationTimestamp:2021-04-29 20:00:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 29 20:00:34.618: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8908,SelfLink:/api/v1/namespaces/watch-8908/configmaps/e2e-watch-test-label-changed,UID:cc0f6302-dc32-4e21-a3aa-3a83ed1ed061,ResourceVersion:2891450,Generation:0,CreationTimestamp:2021-04-29 20:00:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Apr 29 20:00:34.618: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8908,SelfLink:/api/v1/namespaces/watch-8908/configmaps/e2e-watch-test-label-changed,UID:cc0f6302-dc32-4e21-a3aa-3a83ed1ed061,ResourceVersion:2891451,Generation:0,CreationTimestamp:2021-04-29 20:00:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Apr 29 20:00:44.678: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8908,SelfLink:/api/v1/namespaces/watch-8908/configmaps/e2e-watch-test-label-changed,UID:cc0f6302-dc32-4e21-a3aa-3a83ed1ed061,ResourceVersion:2891472,Generation:0,CreationTimestamp:2021-04-29 20:00:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 29 20:00:44.678: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8908,SelfLink:/api/v1/namespaces/watch-8908/configmaps/e2e-watch-test-label-changed,UID:cc0f6302-dc32-4e21-a3aa-3a83ed1ed061,ResourceVersion:2891473,Generation:0,CreationTimestamp:2021-04-29 20:00:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Apr 29 20:00:44.678: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8908,SelfLink:/api/v1/namespaces/watch-8908/configmaps/e2e-watch-test-label-changed,UID:cc0f6302-dc32-4e21-a3aa-3a83ed1ed061,ResourceVersion:2891474,Generation:0,CreationTimestamp:2021-04-29 20:00:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:00:44.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8908" for this suite.
Apr 29 20:00:50.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:00:50.789: INFO: namespace watch-8908 deletion completed in 6.106622912s

• [SLOW TEST:16.281 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
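Editor's note: the watch sequence above tracks a ConfigMap whose label is toggled out of, then back into, the watch's label selector, so the MODIFIED/DELETED events are only delivered once the label is restored. A sketch of the watched object, with the name, namespace, and label values taken from the log lines above:

```yaml
# ConfigMap watched by the test; the watch uses the label selector
# watch-this-configmap=label-changed-and-restored, so events stop
# arriving while the label is changed and resume once it is restored.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  namespace: watch-8908
  labels:
    watch-this-configmap: label-changed-and-restored
data:
  mutation: "1"   # incremented on each modification, per the event dumps above
```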
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:00:50.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-d144b055-c0fd-46ca-96de-1b25542fc456
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:00:56.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-442" for this suite.
Apr 29 20:01:18.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:01:19.038: INFO: namespace configmap-442 deletion completed in 22.121557304s

• [SLOW TEST:28.249 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
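Editor's note: the test above mounts a ConfigMap carrying both text and binary data into a pod volume. A hedged sketch of such a ConfigMap (the name is taken from the log; the data keys and payload are illustrative, not from the test source):

```yaml
# binaryData entries are base64-encoded in the manifest and surface as
# raw bytes in files of the mounted volume; data entries surface as text.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd-d144b055-c0fd-46ca-96de-1b25542fc456
data:
  text-data: value-1            # illustrative key/value
binaryData:
  dump.bin: aGVsbG8gd29ybGQK   # illustrative base64 payload
```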
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:01:19.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-61e017e7-951d-4136-aa49-9aa674b4c684
STEP: Creating a pod to test consume secrets
Apr 29 20:01:19.127: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-95755987-f34e-45d7-8d3a-f2b67fcaed32" in namespace "projected-2814" to be "success or failure"
Apr 29 20:01:19.182: INFO: Pod "pod-projected-secrets-95755987-f34e-45d7-8d3a-f2b67fcaed32": Phase="Pending", Reason="", readiness=false. Elapsed: 54.272389ms
Apr 29 20:01:21.186: INFO: Pod "pod-projected-secrets-95755987-f34e-45d7-8d3a-f2b67fcaed32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058329356s
Apr 29 20:01:23.190: INFO: Pod "pod-projected-secrets-95755987-f34e-45d7-8d3a-f2b67fcaed32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06244441s
STEP: Saw pod success
Apr 29 20:01:23.190: INFO: Pod "pod-projected-secrets-95755987-f34e-45d7-8d3a-f2b67fcaed32" satisfied condition "success or failure"
Apr 29 20:01:23.193: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-95755987-f34e-45d7-8d3a-f2b67fcaed32 container projected-secret-volume-test: 
STEP: delete the pod
Apr 29 20:01:23.233: INFO: Waiting for pod pod-projected-secrets-95755987-f34e-45d7-8d3a-f2b67fcaed32 to disappear
Apr 29 20:01:23.246: INFO: Pod pod-projected-secrets-95755987-f34e-45d7-8d3a-f2b67fcaed32 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:01:23.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2814" for this suite.
Apr 29 20:01:29.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:01:29.352: INFO: namespace projected-2814 deletion completed in 6.10223321s

• [SLOW TEST:10.313 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
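Editor's note: the projected-secret test above consumes a secret through a projected volume with an item mapping and an explicit file mode. A hedged sketch of such a pod (the secret name and container name come from the log; the pod name, image, key, path, and mode are illustrative):

```yaml
# A projected volume remaps secret key "data-1" to a custom path with
# an explicit 0400 file mode, which is what the "mappings and Item
# Mode set" test verifies from inside the container.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # test uses a generated UUID name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-61e017e7-951d-4136-aa49-9aa674b4c684
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400
```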
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:01:29.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Apr 29 20:01:29.386: INFO: PodSpec: initContainers in spec.initContainers
Apr 29 20:02:24.338: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-94a8eaed-9637-4291-a685-4e7c2e91a2c6", GenerateName:"", Namespace:"init-container-520", SelfLink:"/api/v1/namespaces/init-container-520/pods/pod-init-94a8eaed-9637-4291-a685-4e7c2e91a2c6", UID:"4e84ac73-f30e-48dd-be99-ef7d37f87456", ResourceVersion:"2891761", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63755323289, loc:(*time.Location)(0x7edea20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"386001113"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-b9ph8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0026aa300), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-b9ph8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-b9ph8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-b9ph8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0037a64a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc0026a4180), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0037a6530)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0037a6550)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0037a6558), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0037a655c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755323289, loc:(*time.Location)(0x7edea20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755323289, loc:(*time.Location)(0x7edea20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755323289, loc:(*time.Location)(0x7edea20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755323289, loc:(*time.Location)(0x7edea20)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.3", PodIP:"10.244.1.13", StartTime:(*v1.Time)(0xc0012023e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a0dc00)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a0dc70)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://ae4987f74590162d3e3c730a97292655eabec621280f2e17f03bf6c22b5a8348"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001202420), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001202400), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:02:24.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-520" for this suite.
Apr 29 20:02:46.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:02:46.561: INFO: namespace init-container-520 deletion completed in 22.16410814s

• [SLOW TEST:77.208 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
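Editor's note: the pod the init-container test creates can be read directly out of the PodSpec dump above. Reconstructed as a manifest (container names, images, and commands are from the dump; the pod name is generated per run):

```yaml
# init1 runs /bin/false and always fails, so init2 and the app
# container run1 never start; with restartPolicy Always the kubelet
# keeps restarting init1 (the log shows RestartCount:3 after ~55s).
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example   # test generates pod-init-<uuid>
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
```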
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:02:46.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-dacb167c-6994-496d-be0c-15cc4f163f59
STEP: Creating a pod to test consume secrets
Apr 29 20:02:46.691: INFO: Waiting up to 5m0s for pod "pod-secrets-e8ec17c5-68d6-4c53-b9b5-6fc0ba3b881e" in namespace "secrets-2144" to be "success or failure"
Apr 29 20:02:46.697: INFO: Pod "pod-secrets-e8ec17c5-68d6-4c53-b9b5-6fc0ba3b881e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.330544ms
Apr 29 20:02:48.809: INFO: Pod "pod-secrets-e8ec17c5-68d6-4c53-b9b5-6fc0ba3b881e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117792051s
Apr 29 20:02:50.813: INFO: Pod "pod-secrets-e8ec17c5-68d6-4c53-b9b5-6fc0ba3b881e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.121970334s
STEP: Saw pod success
Apr 29 20:02:50.813: INFO: Pod "pod-secrets-e8ec17c5-68d6-4c53-b9b5-6fc0ba3b881e" satisfied condition "success or failure"
Apr 29 20:02:50.816: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-e8ec17c5-68d6-4c53-b9b5-6fc0ba3b881e container secret-volume-test: 
STEP: delete the pod
Apr 29 20:02:50.860: INFO: Waiting for pod pod-secrets-e8ec17c5-68d6-4c53-b9b5-6fc0ba3b881e to disappear
Apr 29 20:02:50.864: INFO: Pod pod-secrets-e8ec17c5-68d6-4c53-b9b5-6fc0ba3b881e no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:02:50.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2144" for this suite.
Apr 29 20:02:56.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:02:57.013: INFO: namespace secrets-2144 deletion completed in 6.145780018s

• [SLOW TEST:10.453 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
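Editor's note: the secrets test above is the plain (non-projected) counterpart of the earlier projected-secret case, using a `secret` volume source with an item mapping. A hedged sketch (the secret name and container name come from the log; the key, path, and pod name are illustrative):

```yaml
# The items list remaps secret key "data-1" to new-path-data-1 inside
# the mounted volume, which the test container then reads back.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example   # test uses a generated UUID name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-dacb167c-6994-496d-be0c-15cc4f163f59
      items:
      - key: data-1
        path: new-path-data-1
```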
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:02:57.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-8b191f84-cf50-48b9-b922-68854e53576d
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:02:57.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2578" for this suite.
Apr 29 20:03:03.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:03:03.167: INFO: namespace configmap-2578 deletion completed in 6.106310754s

• [SLOW TEST:6.153 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
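Editor's note: the empty-key test above expects the API server to reject the object at validation time, which is why no creation event appears in the log. A sketch of the kind of manifest it submits (the name is taken from the log; the value is illustrative):

```yaml
# Rejected by API-server validation: ConfigMap data keys must be
# non-empty and consist of alphanumerics, '-', '_' or '.'.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-emptyKey-8b191f84-cf50-48b9-b922-68854e53576d
data:
  "": value   # empty key triggers the validation failure
```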
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:03:03.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 29 20:03:03.206: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Apr 29 20:03:03.236: INFO: Pod name sample-pod: Found 0 pods out of 1
Apr 29 20:03:08.241: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 29 20:03:08.241: INFO: Creating deployment "test-rolling-update-deployment"
Apr 29 20:03:08.246: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Apr 29 20:03:08.257: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Apr 29 20:03:10.264: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Apr 29 20:03:10.266: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755323388, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755323388, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63755323388, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63755323388, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 29 20:03:12.270: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Apr 29 20:03:12.279: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-4293,SelfLink:/apis/apps/v1/namespaces/deployment-4293/deployments/test-rolling-update-deployment,UID:bc67e877-8cad-4e3f-b026-aece328a545d,ResourceVersion:2891950,Generation:1,CreationTimestamp:2021-04-29 20:03:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2021-04-29 20:03:08 +0000 UTC 2021-04-29 20:03:08 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2021-04-29 20:03:11 +0000 UTC 2021-04-29 20:03:08 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Apr 29 20:03:12.282: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-4293,SelfLink:/apis/apps/v1/namespaces/deployment-4293/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:a8f5d87b-fe98-4ffa-b848-1ebfa5f6f551,ResourceVersion:2891939,Generation:1,CreationTimestamp:2021-04-29 20:03:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment bc67e877-8cad-4e3f-b026-aece328a545d 0xc002c82247 0xc002c82248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Apr 29 20:03:12.282: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Apr 29 20:03:12.282: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-4293,SelfLink:/apis/apps/v1/namespaces/deployment-4293/replicasets/test-rolling-update-controller,UID:9501e4ac-5bd6-4266-af66-65e5939bd147,ResourceVersion:2891948,Generation:2,CreationTimestamp:2021-04-29 20:03:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment bc67e877-8cad-4e3f-b026-aece328a545d 0xc002c82177 0xc002c82178}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Apr 29 20:03:12.286: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-f8pjl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-f8pjl,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-4293,SelfLink:/api/v1/namespaces/deployment-4293/pods/test-rolling-update-deployment-79f6b9d75c-f8pjl,UID:1585b9d5-626d-4d16-8ff8-3346b900f395,ResourceVersion:2891938,Generation:0,CreationTimestamp:2021-04-29 20:03:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c a8f5d87b-fe98-4ffa-b848-1ebfa5f6f551 0xc00370a2a7 0xc00370a2a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7w6s4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7w6s4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-7w6s4 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00370a320} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00370a340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:03:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:03:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:03:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:03:08 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.103,StartTime:2021-04-29 20:03:08 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2021-04-29 20:03:11 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://957f7b1542d37401d78b0bbe2ffdcf474ca44cedbff2c692a91348d8eef7c0a1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:03:12.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4293" for this suite.
Apr 29 20:03:18.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:03:18.390: INFO: namespace deployment-4293 deletion completed in 6.100796371s

• [SLOW TEST:15.224 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
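Editor's note: the rolling update exercised above corresponds roughly to a Deployment like the one below. This is a sketch reconstructed from the ReplicaSet dump in the log (names, labels, and the `redis` image are taken from the log; everything else, including the strategy parameters, is an assumption — the log's `desired-replicas: 1` / `max-replicas: 2` annotations are consistent with a surge of one extra pod during rollout):

```yaml
# Sketch of the Deployment driving the "RollingUpdateDeployment" test above.
# Field values not visible in the log are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod        # label seen on both old and new ReplicaSets in the log
  strategy:
    type: RollingUpdate       # assumed; consistent with max-replicas: 2 annotation
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0   # image from the new RS dump
```

Updating this Deployment's pod template is what produces the two ReplicaSets seen in the log: the old one (`test-rolling-update-controller`, running nginx) is scaled to 0 while the new hash-suffixed one is scaled to 1.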
SSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:03:18.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-4989/configmap-test-f0a2632a-9d92-464b-8f5a-c36cba5eebbc
STEP: Creating a pod to test consume configMaps
Apr 29 20:03:18.640: INFO: Waiting up to 5m0s for pod "pod-configmaps-5992117b-32d8-4847-89e8-550c7d026d48" in namespace "configmap-4989" to be "success or failure"
Apr 29 20:03:18.643: INFO: Pod "pod-configmaps-5992117b-32d8-4847-89e8-550c7d026d48": Phase="Pending", Reason="", readiness=false. Elapsed: 3.361992ms
Apr 29 20:03:20.679: INFO: Pod "pod-configmaps-5992117b-32d8-4847-89e8-550c7d026d48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039803276s
Apr 29 20:03:22.683: INFO: Pod "pod-configmaps-5992117b-32d8-4847-89e8-550c7d026d48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04358183s
STEP: Saw pod success
Apr 29 20:03:22.683: INFO: Pod "pod-configmaps-5992117b-32d8-4847-89e8-550c7d026d48" satisfied condition "success or failure"
Apr 29 20:03:22.686: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-5992117b-32d8-4847-89e8-550c7d026d48 container env-test: 
STEP: delete the pod
Apr 29 20:03:22.886: INFO: Waiting for pod pod-configmaps-5992117b-32d8-4847-89e8-550c7d026d48 to disappear
Apr 29 20:03:22.895: INFO: Pod pod-configmaps-5992117b-32d8-4847-89e8-550c7d026d48 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:03:22.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4989" for this suite.
Apr 29 20:03:28.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:03:29.005: INFO: namespace configmap-4989 deletion completed in 6.10589306s

• [SLOW TEST:10.614 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
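Editor's note: the ConfigMap-via-environment test above boils down to a pod wiring a ConfigMap key into an env var. A minimal sketch (the container name `env-test` appears in the log; the ConfigMap key, env var name, and image are hypothetical):

```yaml
# Sketch of the "consumable via the environment" pattern tested above.
# Key/value names and the image are assumptions, not taken from the log.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: env-test            # container name from the log
    image: busybox
    command: ["sh", "-c", "env"]   # print env so the test can grep the injected value
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```

The test then reads the pod's logs and asserts the injected value is present, which is why the log shows "Trying to get logs from node ... container env-test".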
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:03:29.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 29 20:03:29.089: INFO: Waiting up to 5m0s for pod "downward-api-5ece5ed5-0d2b-42dd-814e-5cfa19d4001c" in namespace "downward-api-45" to be "success or failure"
Apr 29 20:03:29.094: INFO: Pod "downward-api-5ece5ed5-0d2b-42dd-814e-5cfa19d4001c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.359851ms
Apr 29 20:03:31.128: INFO: Pod "downward-api-5ece5ed5-0d2b-42dd-814e-5cfa19d4001c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038490663s
Apr 29 20:03:33.177: INFO: Pod "downward-api-5ece5ed5-0d2b-42dd-814e-5cfa19d4001c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.087448112s
STEP: Saw pod success
Apr 29 20:03:33.177: INFO: Pod "downward-api-5ece5ed5-0d2b-42dd-814e-5cfa19d4001c" satisfied condition "success or failure"
Apr 29 20:03:33.180: INFO: Trying to get logs from node iruya-worker2 pod downward-api-5ece5ed5-0d2b-42dd-814e-5cfa19d4001c container dapi-container: 
STEP: delete the pod
Apr 29 20:03:33.235: INFO: Waiting for pod downward-api-5ece5ed5-0d2b-42dd-814e-5cfa19d4001c to disappear
Apr 29 20:03:33.249: INFO: Pod downward-api-5ece5ed5-0d2b-42dd-814e-5cfa19d4001c no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:03:33.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-45" for this suite.
Apr 29 20:03:39.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:03:39.362: INFO: namespace downward-api-45 deletion completed in 6.109955055s

• [SLOW TEST:10.357 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
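Editor's note: the Downward API test above injects the pod's own UID through a `fieldRef`. A minimal sketch (the container name `dapi-container` appears in the log; the env var name and image are assumptions):

```yaml
# Sketch of "provide pod UID as env vars" via the Downward API.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-pod
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container      # container name from the log
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # the Downward API field under test
```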
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:03:39.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Apr 29 20:03:39.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-482'
Apr 29 20:03:39.713: INFO: stderr: ""
Apr 29 20:03:39.713: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 29 20:03:39.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-482'
Apr 29 20:03:39.809: INFO: stderr: ""
Apr 29 20:03:39.809: INFO: stdout: "update-demo-nautilus-jg4b6 update-demo-nautilus-kjxhp "
Apr 29 20:03:39.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jg4b6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-482'
Apr 29 20:03:39.912: INFO: stderr: ""
Apr 29 20:03:39.912: INFO: stdout: ""
Apr 29 20:03:39.912: INFO: update-demo-nautilus-jg4b6 is created but not running
Apr 29 20:03:44.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-482'
Apr 29 20:03:45.009: INFO: stderr: ""
Apr 29 20:03:45.009: INFO: stdout: "update-demo-nautilus-jg4b6 update-demo-nautilus-kjxhp "
Apr 29 20:03:45.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jg4b6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-482'
Apr 29 20:03:45.268: INFO: stderr: ""
Apr 29 20:03:45.268: INFO: stdout: "true"
Apr 29 20:03:45.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jg4b6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-482'
Apr 29 20:03:45.361: INFO: stderr: ""
Apr 29 20:03:45.361: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 29 20:03:45.361: INFO: validating pod update-demo-nautilus-jg4b6
Apr 29 20:03:45.423: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 29 20:03:45.423: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 29 20:03:45.423: INFO: update-demo-nautilus-jg4b6 is verified up and running
Apr 29 20:03:45.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kjxhp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-482'
Apr 29 20:03:45.518: INFO: stderr: ""
Apr 29 20:03:45.519: INFO: stdout: "true"
Apr 29 20:03:45.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kjxhp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-482'
Apr 29 20:03:45.611: INFO: stderr: ""
Apr 29 20:03:45.611: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 29 20:03:45.611: INFO: validating pod update-demo-nautilus-kjxhp
Apr 29 20:03:45.615: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 29 20:03:45.615: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 29 20:03:45.615: INFO: update-demo-nautilus-kjxhp is verified up and running
STEP: rolling-update to new replication controller
Apr 29 20:03:45.617: INFO: scanned /root for discovery docs: 
Apr 29 20:03:45.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-482'
Apr 29 20:04:08.177: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Apr 29 20:04:08.177: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 29 20:04:08.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-482'
Apr 29 20:04:08.277: INFO: stderr: ""
Apr 29 20:04:08.277: INFO: stdout: "update-demo-kitten-gzjzw update-demo-kitten-qtn89 "
Apr 29 20:04:08.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gzjzw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-482'
Apr 29 20:04:08.366: INFO: stderr: ""
Apr 29 20:04:08.366: INFO: stdout: "true"
Apr 29 20:04:08.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gzjzw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-482'
Apr 29 20:04:08.465: INFO: stderr: ""
Apr 29 20:04:08.465: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Apr 29 20:04:08.465: INFO: validating pod update-demo-kitten-gzjzw
Apr 29 20:04:08.468: INFO: got data: {
  "image": "kitten.jpg"
}

Apr 29 20:04:08.468: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Apr 29 20:04:08.468: INFO: update-demo-kitten-gzjzw is verified up and running
Apr 29 20:04:08.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qtn89 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-482'
Apr 29 20:04:08.556: INFO: stderr: ""
Apr 29 20:04:08.556: INFO: stdout: "true"
Apr 29 20:04:08.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qtn89 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-482'
Apr 29 20:04:08.643: INFO: stderr: ""
Apr 29 20:04:08.643: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Apr 29 20:04:08.643: INFO: validating pod update-demo-kitten-qtn89
Apr 29 20:04:08.655: INFO: got data: {
  "image": "kitten.jpg"
}

Apr 29 20:04:08.655: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Apr 29 20:04:08.655: INFO: update-demo-kitten-qtn89 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:04:08.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-482" for this suite.
Apr 29 20:04:32.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:04:32.768: INFO: namespace kubectl-482 deletion completed in 24.109763554s

• [SLOW TEST:53.405 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
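Editor's note: the `kubectl rolling-update` run above (deprecated even in this v1.15 log, per its own stderr) starts from a ReplicationController like the sketch below. Selector label, container name, replica count, and images all appear in the log; the port is an assumption:

```yaml
# Sketch of the initial RC; rolling-update replaces the nautilus image
# with gcr.io/kubernetes-e2e-test-images/kitten:1.0 as shown in the log.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80    # assumed
```

The log's "Scaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0" is the client-side rollout this command performed; the modern equivalent is a Deployment with `kubectl rollout`.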
S
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:04:32.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0429 20:05:03.386050       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 29 20:05:03.386: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:05:03.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7474" for this suite.
Apr 29 20:05:09.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:05:09.482: INFO: namespace gc-7474 deletion completed in 6.092758319s

• [SLOW TEST:36.714 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
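Editor's note: the garbage-collector test above deletes a Deployment with `propagationPolicy: Orphan`, then waits 30 seconds to confirm the GC does not delete the orphaned ReplicaSet. The delete call body is the standard API DeleteOptions (this fragment is the generic shape, not copied from the test source):

```json
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Orphan"
}
```

With this policy the owner's `ownerReferences` are stripped from its dependents instead of cascading the delete, so the ReplicaSet survives its Deployment, which is exactly what the 30-second wait in the log verifies.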
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:05:09.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-41d3278f-afec-4135-a386-e29cda3d5b63
STEP: Creating a pod to test consume configMaps
Apr 29 20:05:09.719: INFO: Waiting up to 5m0s for pod "pod-configmaps-f4b4a4b4-3153-4a64-9214-662f764a89ff" in namespace "configmap-1219" to be "success or failure"
Apr 29 20:05:09.741: INFO: Pod "pod-configmaps-f4b4a4b4-3153-4a64-9214-662f764a89ff": Phase="Pending", Reason="", readiness=false. Elapsed: 22.255217ms
Apr 29 20:05:11.849: INFO: Pod "pod-configmaps-f4b4a4b4-3153-4a64-9214-662f764a89ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129969341s
Apr 29 20:05:13.853: INFO: Pod "pod-configmaps-f4b4a4b4-3153-4a64-9214-662f764a89ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.1339763s
STEP: Saw pod success
Apr 29 20:05:13.853: INFO: Pod "pod-configmaps-f4b4a4b4-3153-4a64-9214-662f764a89ff" satisfied condition "success or failure"
Apr 29 20:05:13.856: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-f4b4a4b4-3153-4a64-9214-662f764a89ff container configmap-volume-test: 
STEP: delete the pod
Apr 29 20:05:13.874: INFO: Waiting for pod pod-configmaps-f4b4a4b4-3153-4a64-9214-662f764a89ff to disappear
Apr 29 20:05:13.885: INFO: Pod pod-configmaps-f4b4a4b4-3153-4a64-9214-662f764a89ff no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:05:13.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1219" for this suite.
Apr 29 20:05:20.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:05:20.117: INFO: namespace configmap-1219 deletion completed in 6.228786855s

• [SLOW TEST:10.634 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
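Editor's note: "consumable from pods in volume with mappings" refers to the `items` key/path remapping of a ConfigMap volume. A minimal sketch (the container name `configmap-volume-test` appears in the log; keys, paths, and the image are hypothetical):

```yaml
# Sketch of a ConfigMap volume with an explicit key -> path mapping.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test   # container name from the log
    image: busybox
    command: ["cat", "/etc/configmap-volume/my-path"]  # read the remapped file
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1               # ConfigMap key (assumed)
        path: my-path             # file name inside the mount (assumed)
```

Without `items`, every key becomes a file named after the key; the mapping is what this test exercises.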
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:05:20.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Apr 29 20:05:20.171: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:05:27.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1981" for this suite.
Apr 29 20:05:52.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:05:52.186: INFO: namespace init-container-1981 deletion completed in 24.171821653s

• [SLOW TEST:32.068 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
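The behavior exercised above (init containers run to completion, in order, before the app container starts on a RestartAlways pod) can be sketched as follows; names are illustrative, not the e2e fixture's:

```yaml
# Illustrative sketch: two init containers that must each exit 0, in order,
# before the long-running app container is started.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo            # hypothetical name
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "sleep 3600"]
```
------------------------------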
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:05:52.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 29 20:05:52.251: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/: 
alternatives.log
containers/

>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Apr 29 20:05:58.499: INFO: Waiting up to 5m0s for pod "client-containers-5f95c45b-0a70-4d3b-b710-16ef0d2d29b1" in namespace "containers-7713" to be "success or failure"
Apr 29 20:05:58.502: INFO: Pod "client-containers-5f95c45b-0a70-4d3b-b710-16ef0d2d29b1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.531134ms
Apr 29 20:06:00.505: INFO: Pod "client-containers-5f95c45b-0a70-4d3b-b710-16ef0d2d29b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006823649s
Apr 29 20:06:02.509: INFO: Pod "client-containers-5f95c45b-0a70-4d3b-b710-16ef0d2d29b1": Phase="Running", Reason="", readiness=true. Elapsed: 4.010539832s
Apr 29 20:06:04.513: INFO: Pod "client-containers-5f95c45b-0a70-4d3b-b710-16ef0d2d29b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014181706s
STEP: Saw pod success
Apr 29 20:06:04.513: INFO: Pod "client-containers-5f95c45b-0a70-4d3b-b710-16ef0d2d29b1" satisfied condition "success or failure"
Apr 29 20:06:04.515: INFO: Trying to get logs from node iruya-worker pod client-containers-5f95c45b-0a70-4d3b-b710-16ef0d2d29b1 container test-container: 
STEP: delete the pod
Apr 29 20:06:04.555: INFO: Waiting for pod client-containers-5f95c45b-0a70-4d3b-b710-16ef0d2d29b1 to disappear
Apr 29 20:06:04.592: INFO: Pod client-containers-5f95c45b-0a70-4d3b-b710-16ef0d2d29b1 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:06:04.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7713" for this suite.
Apr 29 20:06:10.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:06:10.699: INFO: namespace containers-7713 deletion completed in 6.103822646s

• [SLOW TEST:12.262 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
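In pod-spec terms, "override the image's default arguments (docker cmd)" means `args` replaces the image's CMD while any built-in ENTRYPOINT is kept. A hedged sketch, with illustrative names:

```yaml
# Illustrative sketch: args overrides the image CMD; command (ENTRYPOINT) is untouched.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    args: ["echo", "override", "arguments"]
```
------------------------------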
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:06:10.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-b6w9
STEP: Creating a pod to test atomic-volume-subpath
Apr 29 20:06:10.794: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-b6w9" in namespace "subpath-4306" to be "success or failure"
Apr 29 20:06:10.798: INFO: Pod "pod-subpath-test-projected-b6w9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.581253ms
Apr 29 20:06:12.801: INFO: Pod "pod-subpath-test-projected-b6w9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006984373s
Apr 29 20:06:14.806: INFO: Pod "pod-subpath-test-projected-b6w9": Phase="Running", Reason="", readiness=true. Elapsed: 4.011715046s
Apr 29 20:06:16.810: INFO: Pod "pod-subpath-test-projected-b6w9": Phase="Running", Reason="", readiness=true. Elapsed: 6.016023922s
Apr 29 20:06:18.814: INFO: Pod "pod-subpath-test-projected-b6w9": Phase="Running", Reason="", readiness=true. Elapsed: 8.019850941s
Apr 29 20:06:20.818: INFO: Pod "pod-subpath-test-projected-b6w9": Phase="Running", Reason="", readiness=true. Elapsed: 10.024467594s
Apr 29 20:06:22.823: INFO: Pod "pod-subpath-test-projected-b6w9": Phase="Running", Reason="", readiness=true. Elapsed: 12.028695852s
Apr 29 20:06:24.827: INFO: Pod "pod-subpath-test-projected-b6w9": Phase="Running", Reason="", readiness=true. Elapsed: 14.032837493s
Apr 29 20:06:26.831: INFO: Pod "pod-subpath-test-projected-b6w9": Phase="Running", Reason="", readiness=true. Elapsed: 16.036970118s
Apr 29 20:06:28.835: INFO: Pod "pod-subpath-test-projected-b6w9": Phase="Running", Reason="", readiness=true. Elapsed: 18.041338986s
Apr 29 20:06:30.839: INFO: Pod "pod-subpath-test-projected-b6w9": Phase="Running", Reason="", readiness=true. Elapsed: 20.045067879s
Apr 29 20:06:32.843: INFO: Pod "pod-subpath-test-projected-b6w9": Phase="Running", Reason="", readiness=true. Elapsed: 22.04926107s
Apr 29 20:06:34.847: INFO: Pod "pod-subpath-test-projected-b6w9": Phase="Running", Reason="", readiness=true. Elapsed: 24.053431394s
Apr 29 20:06:36.851: INFO: Pod "pod-subpath-test-projected-b6w9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.057464716s
STEP: Saw pod success
Apr 29 20:06:36.852: INFO: Pod "pod-subpath-test-projected-b6w9" satisfied condition "success or failure"
Apr 29 20:06:36.855: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-b6w9 container test-container-subpath-projected-b6w9: 
STEP: delete the pod
Apr 29 20:06:36.884: INFO: Waiting for pod pod-subpath-test-projected-b6w9 to disappear
Apr 29 20:06:36.894: INFO: Pod pod-subpath-test-projected-b6w9 no longer exists
STEP: Deleting pod pod-subpath-test-projected-b6w9
Apr 29 20:06:36.894: INFO: Deleting pod "pod-subpath-test-projected-b6w9" in namespace "subpath-4306"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:06:36.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4306" for this suite.
Apr 29 20:06:42.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:06:43.028: INFO: namespace subpath-4306 deletion completed in 6.111551723s

• [SLOW TEST:32.329 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
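The atomic-writer subpath case above boils down to mounting a single entry out of a projected volume via `subPath`. A sketch under illustrative names:

```yaml
# Illustrative sketch: mount one file from a projected volume using subPath.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: docker.io/library/busybox:1.29
    command: ["cat", "/probe-volume"]
    volumeMounts:
    - name: projected-vol
      mountPath: /probe-volume
      subPath: probe-file          # only this entry from the volume is mounted
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: example-configmap  # assumed to exist, with a key named probe-file
```
------------------------------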
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:06:43.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 29 20:06:43.114: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b2c0534-6d3d-4a20-af93-6e512b73f205" in namespace "downward-api-7338" to be "success or failure"
Apr 29 20:06:43.117: INFO: Pod "downwardapi-volume-8b2c0534-6d3d-4a20-af93-6e512b73f205": Phase="Pending", Reason="", readiness=false. Elapsed: 2.998677ms
Apr 29 20:06:45.121: INFO: Pod "downwardapi-volume-8b2c0534-6d3d-4a20-af93-6e512b73f205": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006744275s
Apr 29 20:06:47.126: INFO: Pod "downwardapi-volume-8b2c0534-6d3d-4a20-af93-6e512b73f205": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011825361s
STEP: Saw pod success
Apr 29 20:06:47.126: INFO: Pod "downwardapi-volume-8b2c0534-6d3d-4a20-af93-6e512b73f205" satisfied condition "success or failure"
Apr 29 20:06:47.129: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8b2c0534-6d3d-4a20-af93-6e512b73f205 container client-container: 
STEP: delete the pod
Apr 29 20:06:47.201: INFO: Waiting for pod downwardapi-volume-8b2c0534-6d3d-4a20-af93-6e512b73f205 to disappear
Apr 29 20:06:47.305: INFO: Pod downwardapi-volume-8b2c0534-6d3d-4a20-af93-6e512b73f205 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:06:47.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7338" for this suite.
Apr 29 20:06:53.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:06:53.419: INFO: namespace downward-api-7338 deletion completed in 6.109810433s

• [SLOW TEST:10.391 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
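The downward API volume used above surfaces the container's CPU limit as a file via `resourceFieldRef`. A sketch with illustrative names and values:

```yaml
# Illustrative sketch: container cpu limit exposed as a file in a downwardAPI volume.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m                   # read back through the volume below
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu      # reported rounded up to the divisor (default: 1 core)
```
------------------------------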
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:06:53.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Apr 29 20:06:53.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5321 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Apr 29 20:06:56.276: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Apr 29 20:06:56.276: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:06:58.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5321" for this suite.
Apr 29 20:07:04.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:07:04.394: INFO: namespace kubectl-5321 deletion completed in 6.107547662s

• [SLOW TEST:10.974 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
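The deprecated `kubectl run --generator=job/v1 --rm` invocation captured in the log corresponds roughly to creating (and afterwards deleting) a Job like the one below; this manifest is an approximation, not what kubectl generated verbatim:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        stdin: true                # --stdin: keep stdin open so the attach can feed it
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
```
------------------------------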
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:07:04.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:07:08.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7192" for this suite.
Apr 29 20:07:14.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:07:14.683: INFO: namespace emptydir-wrapper-7192 deletion completed in 6.099014079s

• [SLOW TEST:10.289 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
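The "should not conflict" case mounts a secret volume and a configMap volume in one pod and checks that their atomic-writer wrapper directories do not collide. Roughly, with illustrative names:

```yaml
# Illustrative sketch: secret and configMap volumes side by side in one pod.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-test
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
    - name: configmap-volume
      mountPath: /etc/configmap-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: example-secret   # assumed to exist
  - name: configmap-volume
    configMap:
      name: example-configmap      # assumed to exist
```
------------------------------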
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:07:14.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 29 20:07:19.292: INFO: Successfully updated pod "pod-update-activedeadlineseconds-31c66c70-fd6b-4087-b9a2-1505953c487c"
Apr 29 20:07:19.293: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-31c66c70-fd6b-4087-b9a2-1505953c487c" in namespace "pods-7894" to be "terminated due to deadline exceeded"
Apr 29 20:07:19.300: INFO: Pod "pod-update-activedeadlineseconds-31c66c70-fd6b-4087-b9a2-1505953c487c": Phase="Running", Reason="", readiness=true. Elapsed: 7.66261ms
Apr 29 20:07:21.304: INFO: Pod "pod-update-activedeadlineseconds-31c66c70-fd6b-4087-b9a2-1505953c487c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.011279663s
Apr 29 20:07:21.304: INFO: Pod "pod-update-activedeadlineseconds-31c66c70-fd6b-4087-b9a2-1505953c487c" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:07:21.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7894" for this suite.
Apr 29 20:07:27.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:07:27.408: INFO: namespace pods-7894 deletion completed in 6.101260812s

• [SLOW TEST:12.724 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:07:27.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:07:27.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-444" for this suite.
Apr 29 20:07:49.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:07:49.698: INFO: namespace pods-444 deletion completed in 22.115374356s

• [SLOW TEST:22.290 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
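QoS class is derived from the resource fields: if every container's requests equal its limits for both cpu and memory, `status.qosClass` is `Guaranteed`; requests below limits yield `Burstable`; no requests or limits at all yields `BestEffort`. A sketch of a Guaranteed pod, with illustrative names and values:

```yaml
# Illustrative sketch: requests == limits on every container => qosClass: Guaranteed.
apiVersion: v1
kind: Pod
metadata:
  name: pod-qos-example            # hypothetical name
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "sleep 3600"]
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 100Mi
```
------------------------------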
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:07:49.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 29 20:07:49.760: INFO: Waiting up to 5m0s for pod "pod-eb10c570-a08b-4b61-b5fb-74118844b0ed" in namespace "emptydir-4037" to be "success or failure"
Apr 29 20:07:49.764: INFO: Pod "pod-eb10c570-a08b-4b61-b5fb-74118844b0ed": Phase="Pending", Reason="", readiness=false. Elapsed: 3.809054ms
Apr 29 20:07:51.768: INFO: Pod "pod-eb10c570-a08b-4b61-b5fb-74118844b0ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007513672s
Apr 29 20:07:53.772: INFO: Pod "pod-eb10c570-a08b-4b61-b5fb-74118844b0ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011663508s
STEP: Saw pod success
Apr 29 20:07:53.772: INFO: Pod "pod-eb10c570-a08b-4b61-b5fb-74118844b0ed" satisfied condition "success or failure"
Apr 29 20:07:53.775: INFO: Trying to get logs from node iruya-worker2 pod pod-eb10c570-a08b-4b61-b5fb-74118844b0ed container test-container: 
STEP: delete the pod
Apr 29 20:07:53.841: INFO: Waiting for pod pod-eb10c570-a08b-4b61-b5fb-74118844b0ed to disappear
Apr 29 20:07:53.854: INFO: Pod pod-eb10c570-a08b-4b61-b5fb-74118844b0ed no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:07:53.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4037" for this suite.
Apr 29 20:07:59.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:07:59.967: INFO: namespace emptydir-4037 deletion completed in 6.109214497s

• [SLOW TEST:10.268 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
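The (non-root,0644,tmpfs) matrix entry above means: run as a non-root UID, create a file with mode 0644, on an emptyDir backed by tmpfs (`medium: Memory`). A sketch with illustrative names and UID:

```yaml
# Illustrative sketch: tmpfs-backed emptyDir written by a non-root user.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example       # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                # non-root
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "umask 022 && echo data > /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # tmpfs
```
------------------------------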
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:07:59.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Apr 29 20:08:00.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2850'
Apr 29 20:08:00.301: INFO: stderr: ""
Apr 29 20:08:00.301: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Apr 29 20:08:01.305: INFO: Selector matched 1 pods for map[app:redis]
Apr 29 20:08:01.305: INFO: Found 0 / 1
Apr 29 20:08:02.305: INFO: Selector matched 1 pods for map[app:redis]
Apr 29 20:08:02.305: INFO: Found 0 / 1
Apr 29 20:08:03.305: INFO: Selector matched 1 pods for map[app:redis]
Apr 29 20:08:03.305: INFO: Found 1 / 1
Apr 29 20:08:03.305: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Apr 29 20:08:03.309: INFO: Selector matched 1 pods for map[app:redis]
Apr 29 20:08:03.309: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Apr 29 20:08:03.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-x925c --namespace=kubectl-2850 -p {"metadata":{"annotations":{"x":"y"}}}'
Apr 29 20:08:03.408: INFO: stderr: ""
Apr 29 20:08:03.408: INFO: stdout: "pod/redis-master-x925c patched\n"
STEP: checking annotations
Apr 29 20:08:03.420: INFO: Selector matched 1 pods for map[app:redis]
Apr 29 20:08:03.420: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:08:03.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2850" for this suite.
Apr 29 20:08:25.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:08:25.538: INFO: namespace kubectl-2850 deletion completed in 22.113644947s

• [SLOW TEST:25.570 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
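The patch body sent above, shown as YAML for readability; `kubectl patch` defaults to a strategic merge patch, so the annotation is merged into `metadata` rather than replacing it:

```yaml
# Strategic-merge-patch body equivalent to -p '{"metadata":{"annotations":{"x":"y"}}}'
metadata:
  annotations:
    x: "y"
```
------------------------------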
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:08:25.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 29 20:08:29.650: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:08:29.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4681" for this suite.
Apr 29 20:08:35.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:08:35.865: INFO: namespace container-runtime-4681 deletion completed in 6.12376843s

• [SLOW TEST:10.327 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:08:35.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-b4f243d5-c5e8-47f6-97c7-e3d73c9711e0
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-b4f243d5-c5e8-47f6-97c7-e3d73c9711e0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:08:42.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4514" for this suite.
Apr 29 20:09:04.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:09:04.165: INFO: namespace configmap-4514 deletion completed in 22.105619879s

• [SLOW TEST:28.300 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:09:04.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0429 20:09:05.348326       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 29 20:09:05.348: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:09:05.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-593" for this suite.
Apr 29 20:09:11.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:09:11.445: INFO: namespace gc-593 deletion completed in 6.093109492s

• [SLOW TEST:7.279 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:09:11.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Apr 29 20:09:11.548: INFO: Waiting up to 5m0s for pod "var-expansion-9206edc3-e97e-4a15-8f6e-98f8934070c7" in namespace "var-expansion-5959" to be "success or failure"
Apr 29 20:09:11.565: INFO: Pod "var-expansion-9206edc3-e97e-4a15-8f6e-98f8934070c7": Phase="Pending", Reason="", readiness=false. Elapsed: 17.210665ms
Apr 29 20:09:13.569: INFO: Pod "var-expansion-9206edc3-e97e-4a15-8f6e-98f8934070c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021317336s
Apr 29 20:09:15.574: INFO: Pod "var-expansion-9206edc3-e97e-4a15-8f6e-98f8934070c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025725338s
STEP: Saw pod success
Apr 29 20:09:15.574: INFO: Pod "var-expansion-9206edc3-e97e-4a15-8f6e-98f8934070c7" satisfied condition "success or failure"
Apr 29 20:09:15.577: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-9206edc3-e97e-4a15-8f6e-98f8934070c7 container dapi-container: 
STEP: delete the pod
Apr 29 20:09:15.614: INFO: Waiting for pod var-expansion-9206edc3-e97e-4a15-8f6e-98f8934070c7 to disappear
Apr 29 20:09:15.649: INFO: Pod var-expansion-9206edc3-e97e-4a15-8f6e-98f8934070c7 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:09:15.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5959" for this suite.
Apr 29 20:09:21.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:09:21.789: INFO: namespace var-expansion-5959 deletion completed in 6.134126906s

• [SLOW TEST:10.343 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:09:21.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Apr 29 20:09:26.364: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1047 pod-service-account-846bd25d-7318-4b9d-817c-e8982531c842 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Apr 29 20:09:29.208: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1047 pod-service-account-846bd25d-7318-4b9d-817c-e8982531c842 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Apr 29 20:09:29.428: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1047 pod-service-account-846bd25d-7318-4b9d-817c-e8982531c842 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:09:29.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1047" for this suite.
Apr 29 20:09:35.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:09:35.787: INFO: namespace svcaccounts-1047 deletion completed in 6.107092927s

• [SLOW TEST:13.998 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:09:35.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:09:35.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2164" for this suite.
Apr 29 20:09:41.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:09:41.951: INFO: namespace services-2164 deletion completed in 6.088929614s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.164 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:09:41.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 29 20:09:42.028: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2392fa99-bfff-417f-baff-824586660c5b" in namespace "downward-api-5968" to be "success or failure"
Apr 29 20:09:42.033: INFO: Pod "downwardapi-volume-2392fa99-bfff-417f-baff-824586660c5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.909034ms
Apr 29 20:09:44.038: INFO: Pod "downwardapi-volume-2392fa99-bfff-417f-baff-824586660c5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009266551s
Apr 29 20:09:46.046: INFO: Pod "downwardapi-volume-2392fa99-bfff-417f-baff-824586660c5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017320579s
STEP: Saw pod success
Apr 29 20:09:46.046: INFO: Pod "downwardapi-volume-2392fa99-bfff-417f-baff-824586660c5b" satisfied condition "success or failure"
Apr 29 20:09:46.048: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-2392fa99-bfff-417f-baff-824586660c5b container client-container: 
STEP: delete the pod
Apr 29 20:09:46.083: INFO: Waiting for pod downwardapi-volume-2392fa99-bfff-417f-baff-824586660c5b to disappear
Apr 29 20:09:46.112: INFO: Pod downwardapi-volume-2392fa99-bfff-417f-baff-824586660c5b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:09:46.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5968" for this suite.
Apr 29 20:09:52.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:09:52.270: INFO: namespace downward-api-5968 deletion completed in 6.151656925s

• [SLOW TEST:10.319 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:09:52.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 29 20:09:52.363: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Apr 29 20:09:54.440: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:09:55.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3010" for this suite.
Apr 29 20:10:01.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:10:01.953: INFO: namespace replication-controller-3010 deletion completed in 6.386293783s

• [SLOW TEST:9.683 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:10:01.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5080
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 20:10:02.021: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 29 20:10:28.282: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.29:8080/dial?request=hostName&protocol=udp&host=10.244.1.28&port=8081&tries=1'] Namespace:pod-network-test-5080 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 20:10:28.282: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 20:10:28.440: INFO: Waiting for endpoints: map[]
Apr 29 20:10:28.443: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.29:8080/dial?request=hostName&protocol=udp&host=10.244.2.118&port=8081&tries=1'] Namespace:pod-network-test-5080 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 20:10:28.443: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 20:10:28.578: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:10:28.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5080" for this suite.
Apr 29 20:10:52.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:10:52.692: INFO: namespace pod-network-test-5080 deletion completed in 24.108852456s

• [SLOW TEST:50.739 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:10:52.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 29 20:10:52.786: INFO: Waiting up to 5m0s for pod "downwardapi-volume-809ab564-9771-4507-8384-1098ee316b83" in namespace "downward-api-3943" to be "success or failure"
Apr 29 20:10:52.789: INFO: Pod "downwardapi-volume-809ab564-9771-4507-8384-1098ee316b83": Phase="Pending", Reason="", readiness=false. Elapsed: 3.157842ms
Apr 29 20:10:54.793: INFO: Pod "downwardapi-volume-809ab564-9771-4507-8384-1098ee316b83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007025582s
Apr 29 20:10:56.797: INFO: Pod "downwardapi-volume-809ab564-9771-4507-8384-1098ee316b83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010733555s
STEP: Saw pod success
Apr 29 20:10:56.797: INFO: Pod "downwardapi-volume-809ab564-9771-4507-8384-1098ee316b83" satisfied condition "success or failure"
Apr 29 20:10:56.799: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-809ab564-9771-4507-8384-1098ee316b83 container client-container: 
STEP: delete the pod
Apr 29 20:10:56.832: INFO: Waiting for pod downwardapi-volume-809ab564-9771-4507-8384-1098ee316b83 to disappear
Apr 29 20:10:56.837: INFO: Pod downwardapi-volume-809ab564-9771-4507-8384-1098ee316b83 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:10:56.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3943" for this suite.
Apr 29 20:11:02.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:11:02.980: INFO: namespace downward-api-3943 deletion completed in 6.139419632s

• [SLOW TEST:10.288 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:11:02.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-0702c2d1-bc13-46dc-89e8-78afad9dccd0
STEP: Creating a pod to test consume configMaps
Apr 29 20:11:03.083: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e6c43f42-e757-44a0-9e77-74fb763a8b36" in namespace "projected-847" to be "success or failure"
Apr 29 20:11:03.106: INFO: Pod "pod-projected-configmaps-e6c43f42-e757-44a0-9e77-74fb763a8b36": Phase="Pending", Reason="", readiness=false. Elapsed: 22.711176ms
Apr 29 20:11:05.113: INFO: Pod "pod-projected-configmaps-e6c43f42-e757-44a0-9e77-74fb763a8b36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030021752s
Apr 29 20:11:07.116: INFO: Pod "pod-projected-configmaps-e6c43f42-e757-44a0-9e77-74fb763a8b36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03277117s
STEP: Saw pod success
Apr 29 20:11:07.116: INFO: Pod "pod-projected-configmaps-e6c43f42-e757-44a0-9e77-74fb763a8b36" satisfied condition "success or failure"
Apr 29 20:11:07.118: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-e6c43f42-e757-44a0-9e77-74fb763a8b36 container projected-configmap-volume-test: 
STEP: delete the pod
Apr 29 20:11:07.270: INFO: Waiting for pod pod-projected-configmaps-e6c43f42-e757-44a0-9e77-74fb763a8b36 to disappear
Apr 29 20:11:07.281: INFO: Pod pod-projected-configmaps-e6c43f42-e757-44a0-9e77-74fb763a8b36 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:11:07.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-847" for this suite.
Apr 29 20:11:13.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:11:13.394: INFO: namespace projected-847 deletion completed in 6.109199562s

• [SLOW TEST:10.413 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:11:13.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 29 20:11:18.012: INFO: Successfully updated pod "pod-update-ebf83dcb-385c-41e5-a3d3-35eb39bf2904"
STEP: verifying the updated pod is in kubernetes
Apr 29 20:11:18.018: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:11:18.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7325" for this suite.
Apr 29 20:11:40.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:11:40.115: INFO: namespace pods-7325 deletion completed in 22.09365996s

• [SLOW TEST:26.721 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:11:40.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-cc6771f6-89e8-4d28-8696-bda401b87593
STEP: Creating configMap with name cm-test-opt-upd-9fcf8bec-91dc-437c-a474-1418060ffadf
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-cc6771f6-89e8-4d28-8696-bda401b87593
STEP: Updating configmap cm-test-opt-upd-9fcf8bec-91dc-437c-a474-1418060ffadf
STEP: Creating configMap with name cm-test-opt-create-09734ec4-db4b-4823-9874-ff2ad7883216
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:11:48.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6235" for this suite.
Apr 29 20:12:10.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:12:10.478: INFO: namespace configmap-6235 deletion completed in 22.165381758s

• [SLOW TEST:30.361 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:12:10.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-f864a16c-9788-4557-96c8-8b03cffd08ee
STEP: Creating a pod to test consume secrets
Apr 29 20:12:10.551: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ee36b7a0-dced-4292-b50d-6500d044a364" in namespace "projected-2" to be "success or failure"
Apr 29 20:12:10.555: INFO: Pod "pod-projected-secrets-ee36b7a0-dced-4292-b50d-6500d044a364": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118967ms
Apr 29 20:12:12.559: INFO: Pod "pod-projected-secrets-ee36b7a0-dced-4292-b50d-6500d044a364": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008169654s
Apr 29 20:12:14.565: INFO: Pod "pod-projected-secrets-ee36b7a0-dced-4292-b50d-6500d044a364": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013532941s
STEP: Saw pod success
Apr 29 20:12:14.565: INFO: Pod "pod-projected-secrets-ee36b7a0-dced-4292-b50d-6500d044a364" satisfied condition "success or failure"
Apr 29 20:12:14.567: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-ee36b7a0-dced-4292-b50d-6500d044a364 container projected-secret-volume-test: 
STEP: delete the pod
Apr 29 20:12:14.583: INFO: Waiting for pod pod-projected-secrets-ee36b7a0-dced-4292-b50d-6500d044a364 to disappear
Apr 29 20:12:14.587: INFO: Pod pod-projected-secrets-ee36b7a0-dced-4292-b50d-6500d044a364 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:12:14.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2" for this suite.
Apr 29 20:12:20.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:12:20.690: INFO: namespace projected-2 deletion completed in 6.10074019s

• [SLOW TEST:10.212 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:12:20.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Apr 29 20:12:20.785: INFO: Waiting up to 5m0s for pod "client-containers-c168d2a7-9bf2-4d46-853f-1f081b58c6c9" in namespace "containers-8403" to be "success or failure"
Apr 29 20:12:20.791: INFO: Pod "client-containers-c168d2a7-9bf2-4d46-853f-1f081b58c6c9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.654487ms
Apr 29 20:12:22.794: INFO: Pod "client-containers-c168d2a7-9bf2-4d46-853f-1f081b58c6c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008570493s
Apr 29 20:12:24.797: INFO: Pod "client-containers-c168d2a7-9bf2-4d46-853f-1f081b58c6c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012204029s
STEP: Saw pod success
Apr 29 20:12:24.797: INFO: Pod "client-containers-c168d2a7-9bf2-4d46-853f-1f081b58c6c9" satisfied condition "success or failure"
Apr 29 20:12:24.799: INFO: Trying to get logs from node iruya-worker pod client-containers-c168d2a7-9bf2-4d46-853f-1f081b58c6c9 container test-container: 
STEP: delete the pod
Apr 29 20:12:24.828: INFO: Waiting for pod client-containers-c168d2a7-9bf2-4d46-853f-1f081b58c6c9 to disappear
Apr 29 20:12:24.839: INFO: Pod client-containers-c168d2a7-9bf2-4d46-853f-1f081b58c6c9 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:12:24.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8403" for this suite.
Apr 29 20:12:30.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:12:30.950: INFO: namespace containers-8403 deletion completed in 6.108879662s

• [SLOW TEST:10.260 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:12:30.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Apr 29 20:12:31.038: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 20:12:31.042: INFO: Number of nodes with available pods: 0
Apr 29 20:12:31.042: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 20:12:32.187: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 20:12:32.189: INFO: Number of nodes with available pods: 0
Apr 29 20:12:32.189: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 20:12:33.061: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 20:12:33.063: INFO: Number of nodes with available pods: 0
Apr 29 20:12:33.063: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 20:12:34.046: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 20:12:34.101: INFO: Number of nodes with available pods: 0
Apr 29 20:12:34.101: INFO: Node iruya-worker is running more than one daemon pod
Apr 29 20:12:35.047: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 20:12:35.050: INFO: Number of nodes with available pods: 2
Apr 29 20:12:35.050: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Apr 29 20:12:35.065: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 20:12:35.070: INFO: Number of nodes with available pods: 1
Apr 29 20:12:35.071: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 29 20:12:36.090: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 20:12:36.093: INFO: Number of nodes with available pods: 1
Apr 29 20:12:36.093: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 29 20:12:37.169: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 20:12:37.174: INFO: Number of nodes with available pods: 1
Apr 29 20:12:37.174: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 29 20:12:38.084: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 20:12:38.087: INFO: Number of nodes with available pods: 1
Apr 29 20:12:38.087: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 29 20:12:39.076: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 29 20:12:39.080: INFO: Number of nodes with available pods: 2
Apr 29 20:12:39.080: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3270, will wait for the garbage collector to delete the pods
Apr 29 20:12:39.146: INFO: Deleting DaemonSet.extensions daemon-set took: 6.289599ms
Apr 29 20:12:39.546: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.283144ms
Apr 29 20:12:49.350: INFO: Number of nodes with available pods: 0
Apr 29 20:12:49.350: INFO: Number of running nodes: 0, number of available pods: 0
Apr 29 20:12:49.353: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3270/daemonsets","resourceVersion":"2894154"},"items":null}

Apr 29 20:12:49.355: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3270/pods","resourceVersion":"2894154"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:12:49.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3270" for this suite.
Apr 29 20:12:55.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:12:55.469: INFO: namespace daemonsets-3270 deletion completed in 6.099194816s

• [SLOW TEST:24.518 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:12:55.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-480e4537-8330-45ba-9f9b-824fd6b5f0b2
STEP: Creating a pod to test consume secrets
Apr 29 20:12:55.697: INFO: Waiting up to 5m0s for pod "pod-secrets-2242b0bd-f5f8-4252-8c93-5755e27d0227" in namespace "secrets-5254" to be "success or failure"
Apr 29 20:12:55.714: INFO: Pod "pod-secrets-2242b0bd-f5f8-4252-8c93-5755e27d0227": Phase="Pending", Reason="", readiness=false. Elapsed: 16.960347ms
Apr 29 20:12:57.773: INFO: Pod "pod-secrets-2242b0bd-f5f8-4252-8c93-5755e27d0227": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075814561s
Apr 29 20:12:59.778: INFO: Pod "pod-secrets-2242b0bd-f5f8-4252-8c93-5755e27d0227": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080414634s
STEP: Saw pod success
Apr 29 20:12:59.778: INFO: Pod "pod-secrets-2242b0bd-f5f8-4252-8c93-5755e27d0227" satisfied condition "success or failure"
Apr 29 20:12:59.781: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-2242b0bd-f5f8-4252-8c93-5755e27d0227 container secret-volume-test: 
STEP: delete the pod
Apr 29 20:12:59.823: INFO: Waiting for pod pod-secrets-2242b0bd-f5f8-4252-8c93-5755e27d0227 to disappear
Apr 29 20:12:59.839: INFO: Pod pod-secrets-2242b0bd-f5f8-4252-8c93-5755e27d0227 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:12:59.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5254" for this suite.
Apr 29 20:13:05.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:13:05.992: INFO: namespace secrets-5254 deletion completed in 6.149689404s
STEP: Destroying namespace "secret-namespace-4195" for this suite.
Apr 29 20:13:12.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:13:12.131: INFO: namespace secret-namespace-4195 deletion completed in 6.138732628s

• [SLOW TEST:16.662 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:13:12.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-3b11a3dc-cc42-4a0e-9290-01df2eca68d8
STEP: Creating a pod to test consume secrets
Apr 29 20:13:12.230: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4ed265a0-ee5a-4a07-b17c-5be932b7ff5b" in namespace "projected-3937" to be "success or failure"
Apr 29 20:13:12.246: INFO: Pod "pod-projected-secrets-4ed265a0-ee5a-4a07-b17c-5be932b7ff5b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.889702ms
Apr 29 20:13:14.414: INFO: Pod "pod-projected-secrets-4ed265a0-ee5a-4a07-b17c-5be932b7ff5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184061574s
Apr 29 20:13:16.426: INFO: Pod "pod-projected-secrets-4ed265a0-ee5a-4a07-b17c-5be932b7ff5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.196370171s
STEP: Saw pod success
Apr 29 20:13:16.426: INFO: Pod "pod-projected-secrets-4ed265a0-ee5a-4a07-b17c-5be932b7ff5b" satisfied condition "success or failure"
Apr 29 20:13:16.429: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-4ed265a0-ee5a-4a07-b17c-5be932b7ff5b container projected-secret-volume-test: 
STEP: delete the pod
Apr 29 20:13:16.467: INFO: Waiting for pod pod-projected-secrets-4ed265a0-ee5a-4a07-b17c-5be932b7ff5b to disappear
Apr 29 20:13:16.485: INFO: Pod pod-projected-secrets-4ed265a0-ee5a-4a07-b17c-5be932b7ff5b no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:13:16.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3937" for this suite.
Apr 29 20:13:22.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:13:22.579: INFO: namespace projected-3937 deletion completed in 6.091334716s

• [SLOW TEST:10.448 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:13:22.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 29 20:13:22.656: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aac58d8c-ffc7-4b97-b98e-b5d2561ee061" in namespace "projected-2076" to be "success or failure"
Apr 29 20:13:22.659: INFO: Pod "downwardapi-volume-aac58d8c-ffc7-4b97-b98e-b5d2561ee061": Phase="Pending", Reason="", readiness=false. Elapsed: 2.846911ms
Apr 29 20:13:24.767: INFO: Pod "downwardapi-volume-aac58d8c-ffc7-4b97-b98e-b5d2561ee061": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111211047s
Apr 29 20:13:26.772: INFO: Pod "downwardapi-volume-aac58d8c-ffc7-4b97-b98e-b5d2561ee061": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.116163336s
STEP: Saw pod success
Apr 29 20:13:26.772: INFO: Pod "downwardapi-volume-aac58d8c-ffc7-4b97-b98e-b5d2561ee061" satisfied condition "success or failure"
Apr 29 20:13:26.775: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-aac58d8c-ffc7-4b97-b98e-b5d2561ee061 container client-container: 
STEP: delete the pod
Apr 29 20:13:26.793: INFO: Waiting for pod downwardapi-volume-aac58d8c-ffc7-4b97-b98e-b5d2561ee061 to disappear
Apr 29 20:13:26.798: INFO: Pod downwardapi-volume-aac58d8c-ffc7-4b97-b98e-b5d2561ee061 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:13:26.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2076" for this suite.
Apr 29 20:13:32.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:13:32.898: INFO: namespace projected-2076 deletion completed in 6.098039975s

• [SLOW TEST:10.319 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:13:32.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 29 20:13:55.023: INFO: Container started at 2021-04-29 20:13:35 +0000 UTC, pod became ready at 2021-04-29 20:13:54 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:13:55.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6520" for this suite.
Apr 29 20:14:17.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:14:17.131: INFO: namespace container-probe-6520 deletion completed in 22.104015161s

• [SLOW TEST:44.231 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
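The behaviour verified above (the pod only becomes ready after the probe's initial delay, roughly 19 seconds between container start and readiness in this run, and is never restarted) corresponds to a pod spec along these lines. This is an illustrative sketch, not the exact manifest the e2e framework builds; the name, image, command, and timings are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo             # hypothetical name
spec:
  restartPolicy: Never             # "never restart" half of the assertion
  containers:
  - name: probe-target
    image: busybox
    command: ["/bin/sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["/bin/true"]     # succeeds as soon as the probe runs
      initialDelaySeconds: 20      # pod stays NotReady at least this long
      periodSeconds: 5
```

The test then compares the container start timestamp against the time the Ready condition flips, asserting the gap is at least the configured initial delay.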
S
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:14:17.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Apr 29 20:14:21.235: INFO: Pod pod-hostip-9ca3b83c-5059-4ff1-b613-629366d83a36 has hostIP: 172.18.0.3
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:14:21.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6118" for this suite.
Apr 29 20:14:43.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:14:43.341: INFO: namespace pods-6118 deletion completed in 22.102259309s

• [SLOW TEST:26.210 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:14:43.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-73358cfc-bce1-4bf3-8053-d085a04cc7f1
STEP: Creating a pod to test consume configMaps
Apr 29 20:14:43.405: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4f162a5d-5e9d-4795-9b24-ffb5b604acc3" in namespace "projected-441" to be "success or failure"
Apr 29 20:14:43.422: INFO: Pod "pod-projected-configmaps-4f162a5d-5e9d-4795-9b24-ffb5b604acc3": Phase="Pending", Reason="", readiness=false. Elapsed: 16.130892ms
Apr 29 20:14:45.425: INFO: Pod "pod-projected-configmaps-4f162a5d-5e9d-4795-9b24-ffb5b604acc3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019833778s
Apr 29 20:14:47.429: INFO: Pod "pod-projected-configmaps-4f162a5d-5e9d-4795-9b24-ffb5b604acc3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024034122s
STEP: Saw pod success
Apr 29 20:14:47.429: INFO: Pod "pod-projected-configmaps-4f162a5d-5e9d-4795-9b24-ffb5b604acc3" satisfied condition "success or failure"
Apr 29 20:14:47.432: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-4f162a5d-5e9d-4795-9b24-ffb5b604acc3 container projected-configmap-volume-test: 
STEP: delete the pod
Apr 29 20:14:47.616: INFO: Waiting for pod pod-projected-configmaps-4f162a5d-5e9d-4795-9b24-ffb5b604acc3 to disappear
Apr 29 20:14:47.726: INFO: Pod pod-projected-configmaps-4f162a5d-5e9d-4795-9b24-ffb5b604acc3 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:14:47.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-441" for this suite.
Apr 29 20:14:53.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:14:53.907: INFO: namespace projected-441 deletion completed in 6.168137691s

• [SLOW TEST:10.566 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:14:53.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Apr 29 20:14:53.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2121'
Apr 29 20:14:54.334: INFO: stderr: ""
Apr 29 20:14:54.335: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Apr 29 20:14:55.339: INFO: Selector matched 1 pods for map[app:redis]
Apr 29 20:14:55.339: INFO: Found 0 / 1
Apr 29 20:14:56.427: INFO: Selector matched 1 pods for map[app:redis]
Apr 29 20:14:56.427: INFO: Found 0 / 1
Apr 29 20:14:57.339: INFO: Selector matched 1 pods for map[app:redis]
Apr 29 20:14:57.339: INFO: Found 0 / 1
Apr 29 20:14:58.339: INFO: Selector matched 1 pods for map[app:redis]
Apr 29 20:14:58.339: INFO: Found 1 / 1
Apr 29 20:14:58.339: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Apr 29 20:14:58.342: INFO: Selector matched 1 pods for map[app:redis]
Apr 29 20:14:58.342: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Apr 29 20:14:58.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-24pgd redis-master --namespace=kubectl-2121'
Apr 29 20:14:58.452: INFO: stderr: ""
Apr 29 20:14:58.452: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 29 Apr 20:14:57.467 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 29 Apr 20:14:57.467 # Server started, Redis version 3.2.12\n1:M 29 Apr 20:14:57.468 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 29 Apr 20:14:57.468 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Apr 29 20:14:58.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-24pgd redis-master --namespace=kubectl-2121 --tail=1'
Apr 29 20:14:58.559: INFO: stderr: ""
Apr 29 20:14:58.559: INFO: stdout: "1:M 29 Apr 20:14:57.468 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Apr 29 20:14:58.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-24pgd redis-master --namespace=kubectl-2121 --limit-bytes=1'
Apr 29 20:14:58.675: INFO: stderr: ""
Apr 29 20:14:58.675: INFO: stdout: " "
STEP: exposing timestamps
Apr 29 20:14:58.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-24pgd redis-master --namespace=kubectl-2121 --tail=1 --timestamps'
Apr 29 20:14:58.779: INFO: stderr: ""
Apr 29 20:14:58.779: INFO: stdout: "2021-04-29T20:14:57.468309146Z 1:M 29 Apr 20:14:57.468 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Apr 29 20:15:01.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-24pgd redis-master --namespace=kubectl-2121 --since=1s'
Apr 29 20:15:01.381: INFO: stderr: ""
Apr 29 20:15:01.381: INFO: stdout: ""
Apr 29 20:15:01.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-24pgd redis-master --namespace=kubectl-2121 --since=24h'
Apr 29 20:15:01.484: INFO: stderr: ""
Apr 29 20:15:01.484: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 29 Apr 20:14:57.467 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 29 Apr 20:14:57.467 # Server started, Redis version 3.2.12\n1:M 29 Apr 20:14:57.468 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 29 Apr 20:14:57.468 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Apr 29 20:15:01.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2121'
Apr 29 20:15:01.599: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 29 20:15:01.599: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Apr 29 20:15:01.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-2121'
Apr 29 20:15:01.696: INFO: stderr: "No resources found.\n"
Apr 29 20:15:01.696: INFO: stdout: ""
Apr 29 20:15:01.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-2121 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 29 20:15:01.783: INFO: stderr: ""
Apr 29 20:15:01.783: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:15:01.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2121" for this suite.
Apr 29 20:15:23.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:15:23.899: INFO: namespace kubectl-2121 deletion completed in 22.113676848s

• [SLOW TEST:29.992 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:15:23.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-7605/configmap-test-f89d9fbf-268a-43f3-bf2f-c1658d8f16f4
STEP: Creating a pod to test consume configMaps
Apr 29 20:15:23.969: INFO: Waiting up to 5m0s for pod "pod-configmaps-d4fa7c0d-8229-400e-ac90-761dddd78d5d" in namespace "configmap-7605" to be "success or failure"
Apr 29 20:15:23.972: INFO: Pod "pod-configmaps-d4fa7c0d-8229-400e-ac90-761dddd78d5d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.546981ms
Apr 29 20:15:25.977: INFO: Pod "pod-configmaps-d4fa7c0d-8229-400e-ac90-761dddd78d5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007814614s
Apr 29 20:15:27.980: INFO: Pod "pod-configmaps-d4fa7c0d-8229-400e-ac90-761dddd78d5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01157054s
Apr 29 20:15:29.984: INFO: Pod "pod-configmaps-d4fa7c0d-8229-400e-ac90-761dddd78d5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01552521s
STEP: Saw pod success
Apr 29 20:15:29.985: INFO: Pod "pod-configmaps-d4fa7c0d-8229-400e-ac90-761dddd78d5d" satisfied condition "success or failure"
Apr 29 20:15:29.987: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-d4fa7c0d-8229-400e-ac90-761dddd78d5d container env-test: 
STEP: delete the pod
Apr 29 20:15:30.033: INFO: Waiting for pod pod-configmaps-d4fa7c0d-8229-400e-ac90-761dddd78d5d to disappear
Apr 29 20:15:30.038: INFO: Pod pod-configmaps-d4fa7c0d-8229-400e-ac90-761dddd78d5d no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:15:30.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7605" for this suite.
Apr 29 20:15:36.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:15:36.148: INFO: namespace configmap-7605 deletion completed in 6.106348241s

• [SLOW TEST:12.248 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:15:36.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 29 20:15:36.238: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eb14420b-c8eb-46c3-883a-eaf51a634292" in namespace "projected-8927" to be "success or failure"
Apr 29 20:15:36.242: INFO: Pod "downwardapi-volume-eb14420b-c8eb-46c3-883a-eaf51a634292": Phase="Pending", Reason="", readiness=false. Elapsed: 3.783338ms
Apr 29 20:15:38.246: INFO: Pod "downwardapi-volume-eb14420b-c8eb-46c3-883a-eaf51a634292": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008163075s
Apr 29 20:15:40.250: INFO: Pod "downwardapi-volume-eb14420b-c8eb-46c3-883a-eaf51a634292": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012608544s
STEP: Saw pod success
Apr 29 20:15:40.250: INFO: Pod "downwardapi-volume-eb14420b-c8eb-46c3-883a-eaf51a634292" satisfied condition "success or failure"
Apr 29 20:15:40.254: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-eb14420b-c8eb-46c3-883a-eaf51a634292 container client-container: 
STEP: delete the pod
Apr 29 20:15:40.273: INFO: Waiting for pod downwardapi-volume-eb14420b-c8eb-46c3-883a-eaf51a634292 to disappear
Apr 29 20:15:40.277: INFO: Pod downwardapi-volume-eb14420b-c8eb-46c3-883a-eaf51a634292 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:15:40.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8927" for this suite.
Apr 29 20:15:46.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:15:46.385: INFO: namespace projected-8927 deletion completed in 6.105158639s

• [SLOW TEST:10.236 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:15:46.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0429 20:16:26.555274       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 29 20:16:26.555: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:16:26.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8101" for this suite.
Apr 29 20:16:34.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:16:34.781: INFO: namespace gc-8101 deletion completed in 8.221710862s

• [SLOW TEST:48.394 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:16:34.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Apr 29 20:16:35.159: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Apr 29 20:16:44.227: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:16:44.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5530" for this suite.
Apr 29 20:16:50.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:16:50.337: INFO: namespace pods-5530 deletion completed in 6.102336651s

• [SLOW TEST:15.556 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:16:50.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 29 20:16:50.406: INFO: Waiting up to 5m0s for pod "pod-9d176338-d71f-41ce-9c7e-63b955f5775c" in namespace "emptydir-1393" to be "success or failure"
Apr 29 20:16:50.457: INFO: Pod "pod-9d176338-d71f-41ce-9c7e-63b955f5775c": Phase="Pending", Reason="", readiness=false. Elapsed: 51.175326ms
Apr 29 20:16:52.461: INFO: Pod "pod-9d176338-d71f-41ce-9c7e-63b955f5775c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054637156s
Apr 29 20:16:54.465: INFO: Pod "pod-9d176338-d71f-41ce-9c7e-63b955f5775c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058544048s
STEP: Saw pod success
Apr 29 20:16:54.465: INFO: Pod "pod-9d176338-d71f-41ce-9c7e-63b955f5775c" satisfied condition "success or failure"
Apr 29 20:16:54.468: INFO: Trying to get logs from node iruya-worker pod pod-9d176338-d71f-41ce-9c7e-63b955f5775c container test-container: 
STEP: delete the pod
Apr 29 20:16:54.548: INFO: Waiting for pod pod-9d176338-d71f-41ce-9c7e-63b955f5775c to disappear
Apr 29 20:16:54.554: INFO: Pod pod-9d176338-d71f-41ce-9c7e-63b955f5775c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:16:54.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1393" for this suite.
Apr 29 20:17:00.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:17:00.665: INFO: namespace emptydir-1393 deletion completed in 6.108370614s

• [SLOW TEST:10.328 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:17:00.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Apr 29 20:17:00.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-380'
Apr 29 20:17:00.963: INFO: stderr: ""
Apr 29 20:17:00.963: INFO: stdout: "pod/pause created\n"
Apr 29 20:17:00.963: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Apr 29 20:17:00.963: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-380" to be "running and ready"
Apr 29 20:17:00.984: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 20.485509ms
Apr 29 20:17:03.068: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104658034s
Apr 29 20:17:05.072: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.108890508s
Apr 29 20:17:05.072: INFO: Pod "pause" satisfied condition "running and ready"
Apr 29 20:17:05.072: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Apr 29 20:17:05.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-380'
Apr 29 20:17:05.170: INFO: stderr: ""
Apr 29 20:17:05.170: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Apr 29 20:17:05.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-380'
Apr 29 20:17:05.272: INFO: stderr: ""
Apr 29 20:17:05.272: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Apr 29 20:17:05.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-380'
Apr 29 20:17:05.369: INFO: stderr: ""
Apr 29 20:17:05.369: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Apr 29 20:17:05.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-380'
Apr 29 20:17:05.470: INFO: stderr: ""
Apr 29 20:17:05.470: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    \n"
[AfterEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Apr 29 20:17:05.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-380'
Apr 29 20:17:05.614: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 29 20:17:05.614: INFO: stdout: "pod \"pause\" force deleted\n"
Apr 29 20:17:05.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-380'
Apr 29 20:17:05.969: INFO: stderr: "No resources found.\n"
Apr 29 20:17:05.969: INFO: stdout: ""
Apr 29 20:17:05.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-380 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 29 20:17:06.146: INFO: stderr: ""
Apr 29 20:17:06.146: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:17:06.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-380" for this suite.
Apr 29 20:17:12.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:17:12.260: INFO: namespace kubectl-380 deletion completed in 6.109822276s

• [SLOW TEST:11.595 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:17:12.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Apr 29 20:17:12.317: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:17:12.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2764" for this suite.
Apr 29 20:17:18.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:17:18.508: INFO: namespace kubectl-2764 deletion completed in 6.097976412s

• [SLOW TEST:6.248 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:17:18.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6910
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Apr 29 20:17:18.630: INFO: Found 0 stateful pods, waiting for 3
Apr 29 20:17:28.635: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 20:17:28.635: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 20:17:28.635: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Apr 29 20:17:28.663: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Apr 29 20:17:38.699: INFO: Updating stateful set ss2
Apr 29 20:17:38.711: INFO: Waiting for Pod statefulset-6910/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Apr 29 20:17:48.892: INFO: Found 2 stateful pods, waiting for 3
Apr 29 20:17:58.897: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 20:17:58.897: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 20:17:58.897: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Apr 29 20:17:58.920: INFO: Updating stateful set ss2
Apr 29 20:17:58.951: INFO: Waiting for Pod statefulset-6910/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 29 20:18:08.977: INFO: Updating stateful set ss2
Apr 29 20:18:09.000: INFO: Waiting for StatefulSet statefulset-6910/ss2 to complete update
Apr 29 20:18:09.000: INFO: Waiting for Pod statefulset-6910/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 29 20:18:19.008: INFO: Waiting for StatefulSet statefulset-6910/ss2 to complete update
Apr 29 20:18:19.008: INFO: Waiting for Pod statefulset-6910/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr 29 20:18:29.006: INFO: Deleting all statefulset in ns statefulset-6910
Apr 29 20:18:29.009: INFO: Scaling statefulset ss2 to 0
Apr 29 20:18:49.566: INFO: Waiting for statefulset status.replicas updated to 0
Apr 29 20:18:49.569: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:18:49.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6910" for this suite.
Apr 29 20:18:57.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:18:57.846: INFO: namespace statefulset-6910 deletion completed in 8.181523738s

• [SLOW TEST:99.337 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:18:57.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:19:24.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-446" for this suite.
Apr 29 20:19:30.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:19:30.275: INFO: namespace namespaces-446 deletion completed in 6.122290067s
STEP: Destroying namespace "nsdeletetest-4663" for this suite.
Apr 29 20:19:30.277: INFO: Namespace nsdeletetest-4663 was already deleted
STEP: Destroying namespace "nsdeletetest-7536" for this suite.
Apr 29 20:19:36.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:19:36.389: INFO: namespace nsdeletetest-7536 deletion completed in 6.112087738s

• [SLOW TEST:38.543 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:19:36.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-gnwkx in namespace proxy-6188
I0429 20:19:36.503759       6 runners.go:180] Created replication controller with name: proxy-service-gnwkx, namespace: proxy-6188, replica count: 1
I0429 20:19:37.554175       6 runners.go:180] proxy-service-gnwkx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 20:19:38.554440       6 runners.go:180] proxy-service-gnwkx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 20:19:39.554666       6 runners.go:180] proxy-service-gnwkx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0429 20:19:40.554836       6 runners.go:180] proxy-service-gnwkx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0429 20:19:41.555045       6 runners.go:180] proxy-service-gnwkx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0429 20:19:42.555335       6 runners.go:180] proxy-service-gnwkx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0429 20:19:43.555560       6 runners.go:180] proxy-service-gnwkx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0429 20:19:44.555809       6 runners.go:180] proxy-service-gnwkx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0429 20:19:45.556028       6 runners.go:180] proxy-service-gnwkx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0429 20:19:46.556357       6 runners.go:180] proxy-service-gnwkx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0429 20:19:47.556566       6 runners.go:180] proxy-service-gnwkx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0429 20:19:48.556817       6 runners.go:180] proxy-service-gnwkx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0429 20:19:49.557083       6 runners.go:180] proxy-service-gnwkx Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Apr 29 20:19:49.560: INFO: setup took 13.107645687s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Apr 29 20:19:49.566: INFO: (0) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 5.901298ms)
Apr 29 20:19:49.566: INFO: (0) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q/proxy/: test (200; 6.090571ms)
Apr 29 20:19:49.567: INFO: (0) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 6.37074ms)
Apr 29 20:19:49.567: INFO: (0) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:1080/proxy/: test<... (200; 7.204499ms)
Apr 29 20:19:49.568: INFO: (0) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname2/proxy/: bar (200; 7.413137ms)
Apr 29 20:19:49.568: INFO: (0) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:1080/proxy/: ... (200; 7.828841ms)
Apr 29 20:19:49.568: INFO: (0) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 7.994971ms)
Apr 29 20:19:49.568: INFO: (0) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 7.901683ms)
Apr 29 20:19:49.570: INFO: (0) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname1/proxy/: foo (200; 9.751841ms)
Apr 29 20:19:49.570: INFO: (0) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname1/proxy/: foo (200; 9.662332ms)
Apr 29 20:19:49.570: INFO: (0) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname2/proxy/: bar (200; 10.16046ms)
Apr 29 20:19:49.573: INFO: (0) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:443/proxy/: ... (200; 2.777002ms)
Apr 29 20:19:49.579: INFO: (1) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:460/proxy/: tls baz (200; 2.992264ms)
Apr 29 20:19:49.579: INFO: (1) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:1080/proxy/: test<... (200; 2.998529ms)
Apr 29 20:19:49.579: INFO: (1) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 3.334257ms)
Apr 29 20:19:49.579: INFO: (1) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 3.329401ms)
Apr 29 20:19:49.579: INFO: (1) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:443/proxy/: test (200; 5.352859ms)
Apr 29 20:19:49.582: INFO: (1) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname2/proxy/: tls qux (200; 5.414842ms)
Apr 29 20:19:49.582: INFO: (1) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname2/proxy/: bar (200; 5.462311ms)
Apr 29 20:19:49.582: INFO: (1) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname1/proxy/: tls baz (200; 5.41577ms)
Apr 29 20:19:49.585: INFO: (2) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 3.21645ms)
Apr 29 20:19:49.585: INFO: (2) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:462/proxy/: tls qux (200; 3.308384ms)
Apr 29 20:19:49.585: INFO: (2) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:1080/proxy/: test<... (200; 3.477721ms)
Apr 29 20:19:49.585: INFO: (2) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q/proxy/: test (200; 3.572666ms)
Apr 29 20:19:49.586: INFO: (2) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 3.973142ms)
Apr 29 20:19:49.586: INFO: (2) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 4.487895ms)
Apr 29 20:19:49.586: INFO: (2) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 4.452239ms)
Apr 29 20:19:49.586: INFO: (2) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:443/proxy/: ... (200; 4.710965ms)
Apr 29 20:19:49.590: INFO: (3) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 3.082666ms)
Apr 29 20:19:49.590: INFO: (3) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 3.151759ms)
Apr 29 20:19:49.590: INFO: (3) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:460/proxy/: tls baz (200; 3.408201ms)
Apr 29 20:19:49.591: INFO: (3) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 4.005706ms)
Apr 29 20:19:49.591: INFO: (3) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:1080/proxy/: test<... (200; 4.244317ms)
Apr 29 20:19:49.591: INFO: (3) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:443/proxy/: test (200; 4.948791ms)
Apr 29 20:19:49.592: INFO: (3) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname1/proxy/: foo (200; 4.949541ms)
Apr 29 20:19:49.592: INFO: (3) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:1080/proxy/: ... (200; 5.114162ms)
Apr 29 20:19:49.592: INFO: (3) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname1/proxy/: foo (200; 5.282147ms)
Apr 29 20:19:49.592: INFO: (3) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname2/proxy/: bar (200; 5.710293ms)
Apr 29 20:19:49.592: INFO: (3) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname1/proxy/: tls baz (200; 5.717283ms)
Apr 29 20:19:49.592: INFO: (3) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname2/proxy/: bar (200; 5.735544ms)
Apr 29 20:19:49.595: INFO: (4) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 2.077569ms)
Apr 29 20:19:49.596: INFO: (4) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:1080/proxy/: test<... (200; 3.971686ms)
Apr 29 20:19:49.598: INFO: (4) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:462/proxy/: tls qux (200; 5.341924ms)
Apr 29 20:19:49.599: INFO: (4) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname1/proxy/: tls baz (200; 6.326962ms)
Apr 29 20:19:49.600: INFO: (4) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:460/proxy/: tls baz (200; 7.034627ms)
Apr 29 20:19:49.600: INFO: (4) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname2/proxy/: tls qux (200; 7.036926ms)
Apr 29 20:19:49.600: INFO: (4) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:1080/proxy/: ... (200; 7.565646ms)
Apr 29 20:19:49.600: INFO: (4) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname2/proxy/: bar (200; 7.766283ms)
Apr 29 20:19:49.601: INFO: (4) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname1/proxy/: foo (200; 8.312191ms)
Apr 29 20:19:49.601: INFO: (4) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname1/proxy/: foo (200; 8.308483ms)
Apr 29 20:19:49.601: INFO: (4) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 8.316211ms)
Apr 29 20:19:49.601: INFO: (4) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 8.43666ms)
Apr 29 20:19:49.601: INFO: (4) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 8.615391ms)
Apr 29 20:19:49.601: INFO: (4) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q/proxy/: test (200; 8.625205ms)
Apr 29 20:19:49.601: INFO: (4) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:443/proxy/: test<... (200; 4.085903ms)
Apr 29 20:19:49.606: INFO: (5) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname2/proxy/: bar (200; 4.400907ms)
Apr 29 20:19:49.606: INFO: (5) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q/proxy/: test (200; 4.488901ms)
Apr 29 20:19:49.606: INFO: (5) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:443/proxy/: ... (200; 4.493209ms)
Apr 29 20:19:49.606: INFO: (5) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname1/proxy/: foo (200; 4.573955ms)
Apr 29 20:19:49.606: INFO: (5) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:460/proxy/: tls baz (200; 4.412254ms)
Apr 29 20:19:49.606: INFO: (5) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname1/proxy/: tls baz (200; 4.484035ms)
Apr 29 20:19:49.606: INFO: (5) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 4.501364ms)
Apr 29 20:19:49.606: INFO: (5) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname2/proxy/: bar (200; 4.420206ms)
Apr 29 20:19:49.607: INFO: (5) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname1/proxy/: foo (200; 5.360181ms)
Apr 29 20:19:49.607: INFO: (5) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:462/proxy/: tls qux (200; 5.585395ms)
Apr 29 20:19:49.610: INFO: (6) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 3.2769ms)
Apr 29 20:19:49.610: INFO: (6) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:462/proxy/: tls qux (200; 3.221808ms)
Apr 29 20:19:49.610: INFO: (6) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q/proxy/: test (200; 3.274674ms)
Apr 29 20:19:49.611: INFO: (6) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:460/proxy/: tls baz (200; 3.287424ms)
Apr 29 20:19:49.611: INFO: (6) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:443/proxy/: ... (200; 4.17875ms)
Apr 29 20:19:49.611: INFO: (6) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 4.211244ms)
Apr 29 20:19:49.612: INFO: (6) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 4.319903ms)
Apr 29 20:19:49.612: INFO: (6) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 4.633043ms)
Apr 29 20:19:49.612: INFO: (6) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:1080/proxy/: test<... (200; 4.635105ms)
Apr 29 20:19:49.613: INFO: (6) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname1/proxy/: tls baz (200; 5.299082ms)
Apr 29 20:19:49.613: INFO: (6) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname1/proxy/: foo (200; 5.875432ms)
Apr 29 20:19:49.613: INFO: (6) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname2/proxy/: bar (200; 5.843968ms)
Apr 29 20:19:49.613: INFO: (6) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname2/proxy/: tls qux (200; 5.868847ms)
Apr 29 20:19:49.613: INFO: (6) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname1/proxy/: foo (200; 5.958557ms)
Apr 29 20:19:49.613: INFO: (6) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname2/proxy/: bar (200; 5.911713ms)
Apr 29 20:19:49.615: INFO: (7) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 2.025893ms)
Apr 29 20:19:49.616: INFO: (7) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:1080/proxy/: ... (200; 2.278343ms)
Apr 29 20:19:49.616: INFO: (7) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:460/proxy/: tls baz (200; 2.299203ms)
Apr 29 20:19:49.616: INFO: (7) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 2.377876ms)
Apr 29 20:19:49.616: INFO: (7) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 2.37752ms)
Apr 29 20:19:49.616: INFO: (7) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:1080/proxy/: test<... (200; 2.837682ms)
Apr 29 20:19:49.617: INFO: (7) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q/proxy/: test (200; 3.415876ms)
Apr 29 20:19:49.617: INFO: (7) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 3.543772ms)
Apr 29 20:19:49.617: INFO: (7) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname1/proxy/: foo (200; 3.644656ms)
Apr 29 20:19:49.617: INFO: (7) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:462/proxy/: tls qux (200; 3.81632ms)
Apr 29 20:19:49.617: INFO: (7) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname2/proxy/: bar (200; 3.856917ms)
Apr 29 20:19:49.617: INFO: (7) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname2/proxy/: tls qux (200; 3.90598ms)
Apr 29 20:19:49.617: INFO: (7) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname1/proxy/: foo (200; 3.939305ms)
Apr 29 20:19:49.617: INFO: (7) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname1/proxy/: tls baz (200; 4.026286ms)
Apr 29 20:19:49.617: INFO: (7) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:443/proxy/: ... (200; 4.445732ms)
Apr 29 20:19:49.622: INFO: (8) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 4.441665ms)
Apr 29 20:19:49.622: INFO: (8) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname1/proxy/: foo (200; 4.455207ms)
Apr 29 20:19:49.622: INFO: (8) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 4.562974ms)
Apr 29 20:19:49.622: INFO: (8) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname2/proxy/: tls qux (200; 4.65521ms)
Apr 29 20:19:49.622: INFO: (8) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:443/proxy/: test<... (200; 5.187507ms)
Apr 29 20:19:49.623: INFO: (8) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:462/proxy/: tls qux (200; 5.213955ms)
Apr 29 20:19:49.623: INFO: (8) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q/proxy/: test (200; 5.206329ms)
Apr 29 20:19:49.623: INFO: (8) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname2/proxy/: bar (200; 5.314853ms)
Apr 29 20:19:49.623: INFO: (8) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:460/proxy/: tls baz (200; 5.430398ms)
Apr 29 20:19:49.626: INFO: (9) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 3.082676ms)
Apr 29 20:19:49.626: INFO: (9) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname2/proxy/: bar (200; 3.217267ms)
Apr 29 20:19:49.626: INFO: (9) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 3.239118ms)
Apr 29 20:19:49.626: INFO: (9) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:1080/proxy/: ... (200; 3.397238ms)
Apr 29 20:19:49.627: INFO: (9) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q/proxy/: test (200; 3.621729ms)
Apr 29 20:19:49.627: INFO: (9) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:443/proxy/: test<... (200; 3.831017ms)
Apr 29 20:19:49.627: INFO: (9) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname2/proxy/: bar (200; 3.920062ms)
Apr 29 20:19:49.627: INFO: (9) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname1/proxy/: foo (200; 3.884704ms)
Apr 29 20:19:49.627: INFO: (9) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 3.941853ms)
Apr 29 20:19:49.627: INFO: (9) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:462/proxy/: tls qux (200; 4.006758ms)
Apr 29 20:19:49.628: INFO: (9) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname1/proxy/: foo (200; 4.599726ms)
Apr 29 20:19:49.628: INFO: (9) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:460/proxy/: tls baz (200; 5.088834ms)
Apr 29 20:19:49.628: INFO: (9) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname1/proxy/: tls baz (200; 5.02817ms)
Apr 29 20:19:49.631: INFO: (10) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q/proxy/: test (200; 2.87602ms)
Apr 29 20:19:49.632: INFO: (10) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 3.565634ms)
Apr 29 20:19:49.632: INFO: (10) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:460/proxy/: tls baz (200; 3.613776ms)
Apr 29 20:19:49.632: INFO: (10) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:443/proxy/: test<... (200; 3.69917ms)
Apr 29 20:19:49.632: INFO: (10) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 4.241679ms)
Apr 29 20:19:49.632: INFO: (10) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname2/proxy/: bar (200; 4.203717ms)
Apr 29 20:19:49.632: INFO: (10) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 4.312341ms)
Apr 29 20:19:49.632: INFO: (10) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname2/proxy/: tls qux (200; 4.2429ms)
Apr 29 20:19:49.632: INFO: (10) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:462/proxy/: tls qux (200; 4.386594ms)
Apr 29 20:19:49.632: INFO: (10) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname1/proxy/: tls baz (200; 4.292076ms)
Apr 29 20:19:49.632: INFO: (10) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 4.377197ms)
Apr 29 20:19:49.632: INFO: (10) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname2/proxy/: bar (200; 4.338405ms)
Apr 29 20:19:49.632: INFO: (10) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname1/proxy/: foo (200; 4.309607ms)
Apr 29 20:19:49.632: INFO: (10) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:1080/proxy/: ... (200; 4.370485ms)
Apr 29 20:19:49.633: INFO: (10) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname1/proxy/: foo (200; 4.465352ms)
Apr 29 20:19:49.635: INFO: (11) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 1.90289ms)
Apr 29 20:19:49.636: INFO: (11) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 3.347357ms)
Apr 29 20:19:49.636: INFO: (11) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname2/proxy/: bar (200; 3.729328ms)
Apr 29 20:19:49.637: INFO: (11) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 4.407795ms)
Apr 29 20:19:49.637: INFO: (11) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname1/proxy/: foo (200; 4.442272ms)
Apr 29 20:19:49.637: INFO: (11) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:462/proxy/: tls qux (200; 4.370945ms)
Apr 29 20:19:49.637: INFO: (11) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:443/proxy/: test<... (200; 4.449434ms)
Apr 29 20:19:49.637: INFO: (11) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:460/proxy/: tls baz (200; 4.438898ms)
Apr 29 20:19:49.637: INFO: (11) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname1/proxy/: tls baz (200; 4.53672ms)
Apr 29 20:19:49.637: INFO: (11) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname2/proxy/: bar (200; 4.50654ms)
Apr 29 20:19:49.637: INFO: (11) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q/proxy/: test (200; 4.452912ms)
Apr 29 20:19:49.637: INFO: (11) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname1/proxy/: foo (200; 4.430776ms)
Apr 29 20:19:49.637: INFO: (11) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:1080/proxy/: ... (200; 4.450893ms)
Apr 29 20:19:49.637: INFO: (11) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 4.501952ms)
Apr 29 20:19:49.637: INFO: (11) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname2/proxy/: tls qux (200; 4.484954ms)
Apr 29 20:19:49.640: INFO: (12) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 2.677725ms)
Apr 29 20:19:49.641: INFO: (12) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname2/proxy/: bar (200; 3.566727ms)
Apr 29 20:19:49.641: INFO: (12) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:1080/proxy/: ... (200; 3.530536ms)
Apr 29 20:19:49.641: INFO: (12) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:462/proxy/: tls qux (200; 3.520511ms)
Apr 29 20:19:49.641: INFO: (12) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q/proxy/: test (200; 3.613232ms)
Apr 29 20:19:49.641: INFO: (12) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 3.626781ms)
Apr 29 20:19:49.641: INFO: (12) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 3.493807ms)
Apr 29 20:19:49.641: INFO: (12) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:443/proxy/: test<... (200; 3.702944ms)
Apr 29 20:19:49.641: INFO: (12) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 3.674465ms)
Apr 29 20:19:49.641: INFO: (12) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname2/proxy/: bar (200; 4.013291ms)
Apr 29 20:19:49.641: INFO: (12) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname1/proxy/: tls baz (200; 4.028381ms)
Apr 29 20:19:49.641: INFO: (12) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname1/proxy/: foo (200; 4.034654ms)
Apr 29 20:19:49.641: INFO: (12) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname1/proxy/: foo (200; 4.093055ms)
Apr 29 20:19:49.641: INFO: (12) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname2/proxy/: tls qux (200; 4.127699ms)
Apr 29 20:19:49.644: INFO: (13) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:462/proxy/: tls qux (200; 2.789772ms)
Apr 29 20:19:49.645: INFO: (13) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 3.563885ms)
Apr 29 20:19:49.645: INFO: (13) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:460/proxy/: tls baz (200; 3.657916ms)
Apr 29 20:19:49.645: INFO: (13) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 3.818997ms)
Apr 29 20:19:49.645: INFO: (13) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:443/proxy/: test<... (200; 3.951932ms)
Apr 29 20:19:49.646: INFO: (13) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:1080/proxy/: ... (200; 3.988586ms)
Apr 29 20:19:49.646: INFO: (13) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q/proxy/: test (200; 4.059299ms)
Apr 29 20:19:49.646: INFO: (13) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname1/proxy/: foo (200; 4.646727ms)
Apr 29 20:19:49.646: INFO: (13) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname2/proxy/: bar (200; 4.837229ms)
Apr 29 20:19:49.646: INFO: (13) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname2/proxy/: bar (200; 4.860071ms)
Apr 29 20:19:49.646: INFO: (13) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname1/proxy/: foo (200; 4.928956ms)
Apr 29 20:19:49.646: INFO: (13) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname2/proxy/: tls qux (200; 4.922459ms)
Apr 29 20:19:49.646: INFO: (13) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname1/proxy/: tls baz (200; 4.954064ms)
Apr 29 20:19:49.649: INFO: (14) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 2.227941ms)
Apr 29 20:19:49.649: INFO: (14) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 2.644957ms)
Apr 29 20:19:49.649: INFO: (14) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:462/proxy/: tls qux (200; 2.853354ms)
Apr 29 20:19:49.650: INFO: (14) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q/proxy/: test (200; 3.038514ms)
Apr 29 20:19:49.650: INFO: (14) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:1080/proxy/: test<... (200; 3.129654ms)
Apr 29 20:19:49.650: INFO: (14) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname2/proxy/: bar (200; 3.488783ms)
Apr 29 20:19:49.650: INFO: (14) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 3.847288ms)
Apr 29 20:19:49.650: INFO: (14) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:443/proxy/: ... (200; 3.913889ms)
Apr 29 20:19:49.651: INFO: (14) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname2/proxy/: tls qux (200; 4.00135ms)
Apr 29 20:19:49.651: INFO: (14) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname1/proxy/: foo (200; 3.909045ms)
Apr 29 20:19:49.653: INFO: (15) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:1080/proxy/: ... (200; 2.463456ms)
Apr 29 20:19:49.653: INFO: (15) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:460/proxy/: tls baz (200; 2.667473ms)
Apr 29 20:19:49.653: INFO: (15) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 2.776958ms)
Apr 29 20:19:49.654: INFO: (15) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:462/proxy/: tls qux (200; 2.930753ms)
Apr 29 20:19:49.654: INFO: (15) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 2.93999ms)
Apr 29 20:19:49.654: INFO: (15) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:1080/proxy/: test<... (200; 2.920904ms)
Apr 29 20:19:49.654: INFO: (15) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:443/proxy/: test (200; 3.837985ms)
Apr 29 20:19:49.654: INFO: (15) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 3.759191ms)
Apr 29 20:19:49.655: INFO: (15) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname1/proxy/: foo (200; 3.903416ms)
Apr 29 20:19:49.655: INFO: (15) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname2/proxy/: bar (200; 3.931283ms)
Apr 29 20:19:49.655: INFO: (15) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname2/proxy/: tls qux (200; 4.090473ms)
Apr 29 20:19:49.655: INFO: (15) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname2/proxy/: bar (200; 4.146174ms)
Apr 29 20:19:49.655: INFO: (15) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname1/proxy/: foo (200; 4.125499ms)
Apr 29 20:19:49.655: INFO: (15) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname1/proxy/: tls baz (200; 4.114721ms)
Apr 29 20:19:49.657: INFO: (16) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 2.178153ms)
Apr 29 20:19:49.658: INFO: (16) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:1080/proxy/: ... (200; 3.000813ms)
Apr 29 20:19:49.658: INFO: (16) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:460/proxy/: tls baz (200; 3.040337ms)
Apr 29 20:19:49.658: INFO: (16) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:462/proxy/: tls qux (200; 3.18587ms)
Apr 29 20:19:49.658: INFO: (16) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 3.216868ms)
Apr 29 20:19:49.658: INFO: (16) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 3.247948ms)
Apr 29 20:19:49.659: INFO: (16) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:1080/proxy/: test<... (200; 3.639999ms)
Apr 29 20:19:49.659: INFO: (16) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q/proxy/: test (200; 3.698506ms)
Apr 29 20:19:49.659: INFO: (16) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 4.059509ms)
Apr 29 20:19:49.659: INFO: (16) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:443/proxy/: test<... (200; 2.658917ms)
Apr 29 20:19:49.662: INFO: (17) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 2.844801ms)
Apr 29 20:19:49.663: INFO: (17) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 3.299616ms)
Apr 29 20:19:49.663: INFO: (17) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:460/proxy/: tls baz (200; 3.346133ms)
Apr 29 20:19:49.663: INFO: (17) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q/proxy/: test (200; 3.3451ms)
Apr 29 20:19:49.663: INFO: (17) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 3.351699ms)
Apr 29 20:19:49.663: INFO: (17) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:443/proxy/: ... (200; 3.483255ms)
Apr 29 20:19:49.663: INFO: (17) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname2/proxy/: bar (200; 3.84389ms)
Apr 29 20:19:49.663: INFO: (17) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname1/proxy/: tls baz (200; 3.937406ms)
Apr 29 20:19:49.663: INFO: (17) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname2/proxy/: bar (200; 4.068732ms)
Apr 29 20:19:49.663: INFO: (17) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname1/proxy/: foo (200; 4.053182ms)
Apr 29 20:19:49.663: INFO: (17) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname1/proxy/: foo (200; 4.132131ms)
Apr 29 20:19:49.665: INFO: (18) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q/proxy/: test (200; 1.817622ms)
Apr 29 20:19:49.667: INFO: (18) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 3.068549ms)
Apr 29 20:19:49.667: INFO: (18) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 3.115569ms)
Apr 29 20:19:49.667: INFO: (18) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:462/proxy/: tls qux (200; 3.309338ms)
Apr 29 20:19:49.667: INFO: (18) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:1080/proxy/: test<... (200; 3.355138ms)
Apr 29 20:19:49.667: INFO: (18) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:160/proxy/: foo (200; 3.569976ms)
Apr 29 20:19:49.667: INFO: (18) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:1080/proxy/: ... (200; 3.586787ms)
Apr 29 20:19:49.667: INFO: (18) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname2/proxy/: bar (200; 3.648859ms)
Apr 29 20:19:49.667: INFO: (18) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:443/proxy/: ... (200; 2.805542ms)
Apr 29 20:19:49.671: INFO: (19) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 1.965504ms)
Apr 29 20:19:49.671: INFO: (19) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:443/proxy/: test<... (200; 3.200631ms)
Apr 29 20:19:49.672: INFO: (19) /api/v1/namespaces/proxy-6188/services/http:proxy-service-gnwkx:portname2/proxy/: bar (200; 3.607997ms)
Apr 29 20:19:49.672: INFO: (19) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname1/proxy/: tls baz (200; 3.703594ms)
Apr 29 20:19:49.672: INFO: (19) /api/v1/namespaces/proxy-6188/pods/proxy-service-gnwkx-scd2q/proxy/: test (200; 3.603862ms)
Apr 29 20:19:49.672: INFO: (19) /api/v1/namespaces/proxy-6188/pods/http:proxy-service-gnwkx-scd2q:162/proxy/: bar (200; 3.946085ms)
Apr 29 20:19:49.672: INFO: (19) /api/v1/namespaces/proxy-6188/services/proxy-service-gnwkx:portname2/proxy/: bar (200; 3.223449ms)
Apr 29 20:19:49.672: INFO: (19) /api/v1/namespaces/proxy-6188/services/https:proxy-service-gnwkx:tlsportname2/proxy/: tls qux (200; 3.729011ms)
Apr 29 20:19:49.672: INFO: (19) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:460/proxy/: tls baz (200; 3.615456ms)
Apr 29 20:19:49.672: INFO: (19) /api/v1/namespaces/proxy-6188/pods/https:proxy-service-gnwkx-scd2q:462/proxy/: tls qux (200; 4.020935ms)
STEP: deleting ReplicationController proxy-service-gnwkx in namespace proxy-6188, will wait for the garbage collector to delete the pods
Apr 29 20:19:49.730: INFO: Deleting ReplicationController proxy-service-gnwkx took: 6.171822ms
Apr 29 20:19:50.030: INFO: Terminating ReplicationController proxy-service-gnwkx pods took: 300.276178ms
[AfterEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:19:52.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6188" for this suite.
Apr 29 20:19:58.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:19:58.331: INFO: namespace proxy-6188 deletion completed in 6.088516484s

• [SLOW TEST:21.942 seconds]
[sig-network] Proxy
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
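The proxy test above runs 20 numbered rounds, hitting each pod and service proxy endpoint and recording per-request latency, which is what produces the `(6) …/proxy/: foo (200; 4.63ms)` lines. The real test lives in the Go e2e framework; the following is only a standalone Python sketch of that measure-latency loop, using a local HTTP server as a stand-in for the apiserver proxy (all names and the `/proxy/` path here are illustrative):

```python
import http.server
import threading
import time
import urllib.request

# Tiny local server standing in for an apiserver proxy endpoint
# (illustrative only; the real test goes through
# /api/v1/namespaces/<ns>/pods/<pod>:<port>/proxy/).
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"foo")

    def log_message(self, *args):  # silence per-request stderr logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/proxy/"

results = []
for i in range(3):  # the e2e test runs 20 such rounds per endpoint
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode()
        status = resp.status
    elapsed_ms = (time.monotonic() - start) * 1000
    results.append((i, body, status, elapsed_ms))
    # Same shape as the log lines above: (round) url: body (status; latency)
    print(f"({i}) {url}: {body} ({status}; {elapsed_ms:.6f}ms)")

server.shutdown()
```

The e2e test additionally fails a round if any endpoint returns a non-200 status or an unexpected body; here that check is left to the caller.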
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:19:58.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-5286101a-dee9-45ae-b2df-65fd3569b51c
STEP: Creating a pod to test consume configMaps
Apr 29 20:19:58.427: INFO: Waiting up to 5m0s for pod "pod-configmaps-ebda6867-5de7-4ce6-a8e2-c980e130f22e" in namespace "configmap-5338" to be "success or failure"
Apr 29 20:19:58.430: INFO: Pod "pod-configmaps-ebda6867-5de7-4ce6-a8e2-c980e130f22e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.194241ms
Apr 29 20:20:00.434: INFO: Pod "pod-configmaps-ebda6867-5de7-4ce6-a8e2-c980e130f22e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007318311s
Apr 29 20:20:02.438: INFO: Pod "pod-configmaps-ebda6867-5de7-4ce6-a8e2-c980e130f22e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011125635s
STEP: Saw pod success
Apr 29 20:20:02.438: INFO: Pod "pod-configmaps-ebda6867-5de7-4ce6-a8e2-c980e130f22e" satisfied condition "success or failure"
Apr 29 20:20:02.441: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-ebda6867-5de7-4ce6-a8e2-c980e130f22e container configmap-volume-test: 
STEP: delete the pod
Apr 29 20:20:02.456: INFO: Waiting for pod pod-configmaps-ebda6867-5de7-4ce6-a8e2-c980e130f22e to disappear
Apr 29 20:20:02.461: INFO: Pod pod-configmaps-ebda6867-5de7-4ce6-a8e2-c980e130f22e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:20:02.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5338" for this suite.
Apr 29 20:20:08.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:20:08.562: INFO: namespace configmap-5338 deletion completed in 6.093429835s

• [SLOW TEST:10.230 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
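The `Waiting up to 5m0s for pod … to be "success or failure"` lines above come from a poll loop that re-reads the pod phase every couple of seconds until it reaches a terminal state or the timeout expires. A minimal Python sketch of that pattern, with the phase source and clock injected so it can run without a cluster (`get_phase`, `now`, and `sleep` are illustrative stand-ins, not the Go framework's API):

```python
import time

def wait_for_phase(get_phase, timeout_s=300.0, interval_s=2.0,
                   now=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until a terminal pod phase is seen, mirroring the
    'Phase="Pending" ... Phase="Succeeded"' progression in the log."""
    start = now()
    while True:
        phase = get_phase()
        elapsed = now() - start
        print(f'Pod phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed >= timeout_s:
            raise TimeoutError(f"pod still {phase} after {elapsed:.0f}s")
        sleep(interval_s)

# Simulated pod that stays Pending for two polls and then succeeds,
# like the 0s / 2s / 4s sequence in the ConfigMap test above.
phases = iter(["Pending", "Pending", "Succeeded"])
phase, _ = wait_for_phase(lambda: next(phases), interval_s=0.0)
```

The "success or failure" condition in the log corresponds to accepting either terminal phase and letting the caller decide whether `Failed` is an error.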
SSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:20:08.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-f8e77655-b5e2-4fcb-9d16-96f40b1c891c
STEP: Creating secret with name secret-projected-all-test-volume-cdef4054-130c-43c0-8484-acde2d5b84b3
STEP: Creating a pod to test Check all projections for projected volume plugin
Apr 29 20:20:08.677: INFO: Waiting up to 5m0s for pod "projected-volume-0d9f1004-2259-4898-b437-80eaeae2aeab" in namespace "projected-1602" to be "success or failure"
Apr 29 20:20:08.689: INFO: Pod "projected-volume-0d9f1004-2259-4898-b437-80eaeae2aeab": Phase="Pending", Reason="", readiness=false. Elapsed: 11.561729ms
Apr 29 20:20:10.694: INFO: Pod "projected-volume-0d9f1004-2259-4898-b437-80eaeae2aeab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016102857s
Apr 29 20:20:12.698: INFO: Pod "projected-volume-0d9f1004-2259-4898-b437-80eaeae2aeab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020580933s
STEP: Saw pod success
Apr 29 20:20:12.698: INFO: Pod "projected-volume-0d9f1004-2259-4898-b437-80eaeae2aeab" satisfied condition "success or failure"
Apr 29 20:20:12.701: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-0d9f1004-2259-4898-b437-80eaeae2aeab container projected-all-volume-test: 
STEP: delete the pod
Apr 29 20:20:12.738: INFO: Waiting for pod projected-volume-0d9f1004-2259-4898-b437-80eaeae2aeab to disappear
Apr 29 20:20:12.755: INFO: Pod projected-volume-0d9f1004-2259-4898-b437-80eaeae2aeab no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:20:12.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1602" for this suite.
Apr 29 20:20:18.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:20:18.864: INFO: namespace projected-1602 deletion completed in 6.105648427s

• [SLOW TEST:10.302 seconds]
[sig-storage] Projected combined
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:20:18.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-mmb6
STEP: Creating a pod to test atomic-volume-subpath
Apr 29 20:20:18.971: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mmb6" in namespace "subpath-3976" to be "success or failure"
Apr 29 20:20:18.981: INFO: Pod "pod-subpath-test-configmap-mmb6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.26235ms
Apr 29 20:20:20.986: INFO: Pod "pod-subpath-test-configmap-mmb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014686492s
Apr 29 20:20:22.990: INFO: Pod "pod-subpath-test-configmap-mmb6": Phase="Running", Reason="", readiness=true. Elapsed: 4.019244149s
Apr 29 20:20:24.994: INFO: Pod "pod-subpath-test-configmap-mmb6": Phase="Running", Reason="", readiness=true. Elapsed: 6.023448529s
Apr 29 20:20:26.999: INFO: Pod "pod-subpath-test-configmap-mmb6": Phase="Running", Reason="", readiness=true. Elapsed: 8.027870384s
Apr 29 20:20:29.003: INFO: Pod "pod-subpath-test-configmap-mmb6": Phase="Running", Reason="", readiness=true. Elapsed: 10.032143625s
Apr 29 20:20:31.007: INFO: Pod "pod-subpath-test-configmap-mmb6": Phase="Running", Reason="", readiness=true. Elapsed: 12.036472s
Apr 29 20:20:33.012: INFO: Pod "pod-subpath-test-configmap-mmb6": Phase="Running", Reason="", readiness=true. Elapsed: 14.04103908s
Apr 29 20:20:35.015: INFO: Pod "pod-subpath-test-configmap-mmb6": Phase="Running", Reason="", readiness=true. Elapsed: 16.044256761s
Apr 29 20:20:37.019: INFO: Pod "pod-subpath-test-configmap-mmb6": Phase="Running", Reason="", readiness=true. Elapsed: 18.048325751s
Apr 29 20:20:39.023: INFO: Pod "pod-subpath-test-configmap-mmb6": Phase="Running", Reason="", readiness=true. Elapsed: 20.052309609s
Apr 29 20:20:41.027: INFO: Pod "pod-subpath-test-configmap-mmb6": Phase="Running", Reason="", readiness=true. Elapsed: 22.056180732s
Apr 29 20:20:43.031: INFO: Pod "pod-subpath-test-configmap-mmb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.060054867s
STEP: Saw pod success
Apr 29 20:20:43.031: INFO: Pod "pod-subpath-test-configmap-mmb6" satisfied condition "success or failure"
Apr 29 20:20:43.033: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-mmb6 container test-container-subpath-configmap-mmb6: 
STEP: delete the pod
Apr 29 20:20:43.076: INFO: Waiting for pod pod-subpath-test-configmap-mmb6 to disappear
Apr 29 20:20:43.083: INFO: Pod pod-subpath-test-configmap-mmb6 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-mmb6
Apr 29 20:20:43.083: INFO: Deleting pod "pod-subpath-test-configmap-mmb6" in namespace "subpath-3976"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:20:43.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3976" for this suite.
Apr 29 20:20:49.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:20:49.260: INFO: namespace subpath-3976 deletion completed in 6.162254646s

• [SLOW TEST:30.396 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:20:49.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Apr 29 20:20:49.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1741'
Apr 29 20:20:51.977: INFO: stderr: ""
Apr 29 20:20:51.977: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 29 20:20:51.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1741'
Apr 29 20:20:52.102: INFO: stderr: ""
Apr 29 20:20:52.102: INFO: stdout: "update-demo-nautilus-gbrzd update-demo-nautilus-kptjz "
Apr 29 20:20:52.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gbrzd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1741'
Apr 29 20:20:52.202: INFO: stderr: ""
Apr 29 20:20:52.203: INFO: stdout: ""
Apr 29 20:20:52.203: INFO: update-demo-nautilus-gbrzd is created but not running
Apr 29 20:20:57.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1741'
Apr 29 20:20:57.320: INFO: stderr: ""
Apr 29 20:20:57.320: INFO: stdout: "update-demo-nautilus-gbrzd update-demo-nautilus-kptjz "
Apr 29 20:20:57.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gbrzd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1741'
Apr 29 20:20:57.408: INFO: stderr: ""
Apr 29 20:20:57.408: INFO: stdout: "true"
Apr 29 20:20:57.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gbrzd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1741'
Apr 29 20:20:57.503: INFO: stderr: ""
Apr 29 20:20:57.503: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 29 20:20:57.503: INFO: validating pod update-demo-nautilus-gbrzd
Apr 29 20:20:57.506: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 29 20:20:57.506: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 29 20:20:57.506: INFO: update-demo-nautilus-gbrzd is verified up and running
Apr 29 20:20:57.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kptjz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1741'
Apr 29 20:20:57.592: INFO: stderr: ""
Apr 29 20:20:57.592: INFO: stdout: "true"
Apr 29 20:20:57.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kptjz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1741'
Apr 29 20:20:57.679: INFO: stderr: ""
Apr 29 20:20:57.679: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 29 20:20:57.679: INFO: validating pod update-demo-nautilus-kptjz
Apr 29 20:20:57.682: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 29 20:20:57.682: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 29 20:20:57.682: INFO: update-demo-nautilus-kptjz is verified up and running
STEP: using delete to clean up resources
Apr 29 20:20:57.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1741'
Apr 29 20:20:57.770: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 29 20:20:57.770: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 29 20:20:57.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1741'
Apr 29 20:20:57.875: INFO: stderr: "No resources found.\n"
Apr 29 20:20:57.875: INFO: stdout: ""
Apr 29 20:20:57.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1741 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 29 20:20:57.962: INFO: stderr: ""
Apr 29 20:20:57.962: INFO: stdout: "update-demo-nautilus-gbrzd\nupdate-demo-nautilus-kptjz\n"
Apr 29 20:20:58.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1741'
Apr 29 20:20:58.850: INFO: stderr: "No resources found.\n"
Apr 29 20:20:58.850: INFO: stdout: ""
Apr 29 20:20:58.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1741 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 29 20:20:58.942: INFO: stderr: ""
Apr 29 20:20:58.942: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:20:58.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1741" for this suite.
Apr 29 20:21:20.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:21:21.078: INFO: namespace kubectl-1741 deletion completed in 22.132162367s

• [SLOW TEST:31.817 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:21:21.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5353.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5353.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5353.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5353.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5353.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5353.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 29 20:21:27.200: INFO: DNS probes using dns-5353/dns-test-65af54b8-9f0c-4201-8571-004f68810ee1 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:21:27.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5353" for this suite.
Apr 29 20:21:33.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:21:33.375: INFO: namespace dns-5353 deletion completed in 6.108470076s

• [SLOW TEST:12.296 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:21:33.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:22:33.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4367" for this suite.
Apr 29 20:22:55.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:22:55.586: INFO: namespace container-probe-4367 deletion completed in 22.091886531s

• [SLOW TEST:82.211 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:22:55.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 29 20:22:55.664: INFO: Waiting up to 5m0s for pod "pod-d08cf849-5c43-4e9e-8261-85ca7e6f81d8" in namespace "emptydir-1101" to be "success or failure"
Apr 29 20:22:55.668: INFO: Pod "pod-d08cf849-5c43-4e9e-8261-85ca7e6f81d8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.700816ms
Apr 29 20:22:57.720: INFO: Pod "pod-d08cf849-5c43-4e9e-8261-85ca7e6f81d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055549429s
Apr 29 20:22:59.723: INFO: Pod "pod-d08cf849-5c43-4e9e-8261-85ca7e6f81d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059191739s
STEP: Saw pod success
Apr 29 20:22:59.723: INFO: Pod "pod-d08cf849-5c43-4e9e-8261-85ca7e6f81d8" satisfied condition "success or failure"
Apr 29 20:22:59.726: INFO: Trying to get logs from node iruya-worker2 pod pod-d08cf849-5c43-4e9e-8261-85ca7e6f81d8 container test-container: 
STEP: delete the pod
Apr 29 20:22:59.744: INFO: Waiting for pod pod-d08cf849-5c43-4e9e-8261-85ca7e6f81d8 to disappear
Apr 29 20:22:59.749: INFO: Pod pod-d08cf849-5c43-4e9e-8261-85ca7e6f81d8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:22:59.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1101" for this suite.
Apr 29 20:23:05.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:23:05.901: INFO: namespace emptydir-1101 deletion completed in 6.149548908s

• [SLOW TEST:10.315 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:23:05.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-43e83ad5-3c60-455b-b66a-829bc2679310
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-43e83ad5-3c60-455b-b66a-829bc2679310
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:23:14.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8559" for this suite.
Apr 29 20:23:36.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:23:36.168: INFO: namespace projected-8559 deletion completed in 22.125878054s

• [SLOW TEST:30.266 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:23:36.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:23:40.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6344" for this suite.
Apr 29 20:24:24.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:24:24.354: INFO: namespace kubelet-test-6344 deletion completed in 44.105626843s

• [SLOW TEST:48.185 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:24:24.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:24:30.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6083" for this suite.
Apr 29 20:25:20.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:25:20.735: INFO: namespace kubelet-test-6083 deletion completed in 50.128211227s

• [SLOW TEST:56.380 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:25:20.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Apr 29 20:25:25.840: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:25:26.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9623" for this suite.
Apr 29 20:25:48.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:25:48.990: INFO: namespace replicaset-9623 deletion completed in 22.125803639s

• [SLOW TEST:28.255 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:25:48.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 29 20:25:49.094: INFO: Waiting up to 5m0s for pod "downward-api-15e82a7a-671b-4c50-be03-6ae4c595e604" in namespace "downward-api-4310" to be "success or failure"
Apr 29 20:25:49.109: INFO: Pod "downward-api-15e82a7a-671b-4c50-be03-6ae4c595e604": Phase="Pending", Reason="", readiness=false. Elapsed: 14.975485ms
Apr 29 20:25:51.113: INFO: Pod "downward-api-15e82a7a-671b-4c50-be03-6ae4c595e604": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019459694s
Apr 29 20:25:53.117: INFO: Pod "downward-api-15e82a7a-671b-4c50-be03-6ae4c595e604": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023422427s
STEP: Saw pod success
Apr 29 20:25:53.117: INFO: Pod "downward-api-15e82a7a-671b-4c50-be03-6ae4c595e604" satisfied condition "success or failure"
Apr 29 20:25:53.121: INFO: Trying to get logs from node iruya-worker2 pod downward-api-15e82a7a-671b-4c50-be03-6ae4c595e604 container dapi-container: 
STEP: delete the pod
Apr 29 20:25:53.291: INFO: Waiting for pod downward-api-15e82a7a-671b-4c50-be03-6ae4c595e604 to disappear
Apr 29 20:25:53.301: INFO: Pod downward-api-15e82a7a-671b-4c50-be03-6ae4c595e604 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:25:53.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4310" for this suite.
Apr 29 20:25:59.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:25:59.421: INFO: namespace downward-api-4310 deletion completed in 6.116421146s

• [SLOW TEST:10.431 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:25:59.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1241
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Apr 29 20:25:59.493: INFO: Found 0 stateful pods, waiting for 3
Apr 29 20:26:09.498: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 20:26:09.499: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 20:26:09.499: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Apr 29 20:26:19.499: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 20:26:19.499: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 20:26:19.499: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 20:26:19.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1241 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 29 20:26:19.784: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Apr 29 20:26:19.784: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 29 20:26:19.784: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Apr 29 20:26:29.819: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Apr 29 20:26:39.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1241 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 29 20:26:40.121: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Apr 29 20:26:40.121: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 29 20:26:40.121: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Apr 29 20:26:50.139: INFO: Waiting for StatefulSet statefulset-1241/ss2 to complete update
Apr 29 20:26:50.139: INFO: Waiting for Pod statefulset-1241/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 29 20:26:50.139: INFO: Waiting for Pod statefulset-1241/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 29 20:27:00.146: INFO: Waiting for StatefulSet statefulset-1241/ss2 to complete update
Apr 29 20:27:00.146: INFO: Waiting for Pod statefulset-1241/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 29 20:27:10.145: INFO: Waiting for StatefulSet statefulset-1241/ss2 to complete update
STEP: Rolling back to a previous revision
Apr 29 20:27:20.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1241 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 29 20:27:20.470: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Apr 29 20:27:20.470: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 29 20:27:20.470: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Apr 29 20:27:30.500: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Apr 29 20:27:40.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1241 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 29 20:27:40.760: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Apr 29 20:27:40.760: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 29 20:27:40.760: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr 29 20:28:10.781: INFO: Deleting all statefulset in ns statefulset-1241
Apr 29 20:28:10.783: INFO: Scaling statefulset ss2 to 0
Apr 29 20:28:30.798: INFO: Waiting for statefulset status.replicas updated to 0
Apr 29 20:28:30.802: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:28:30.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1241" for this suite.
Apr 29 20:28:36.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:28:36.912: INFO: namespace statefulset-1241 deletion completed in 6.093729801s

• [SLOW TEST:157.491 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:28:36.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 29 20:28:37.013: INFO: Waiting up to 5m0s for pod "downwardapi-volume-066c39de-4075-499c-9b17-f998aa886680" in namespace "projected-5209" to be "success or failure"
Apr 29 20:28:37.017: INFO: Pod "downwardapi-volume-066c39de-4075-499c-9b17-f998aa886680": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060028ms
Apr 29 20:28:39.020: INFO: Pod "downwardapi-volume-066c39de-4075-499c-9b17-f998aa886680": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007552676s
Apr 29 20:28:41.024: INFO: Pod "downwardapi-volume-066c39de-4075-499c-9b17-f998aa886680": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011403357s
STEP: Saw pod success
Apr 29 20:28:41.024: INFO: Pod "downwardapi-volume-066c39de-4075-499c-9b17-f998aa886680" satisfied condition "success or failure"
Apr 29 20:28:41.027: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-066c39de-4075-499c-9b17-f998aa886680 container client-container: 
STEP: delete the pod
Apr 29 20:28:41.051: INFO: Waiting for pod downwardapi-volume-066c39de-4075-499c-9b17-f998aa886680 to disappear
Apr 29 20:28:41.083: INFO: Pod downwardapi-volume-066c39de-4075-499c-9b17-f998aa886680 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:28:41.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5209" for this suite.
Apr 29 20:28:47.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:28:47.198: INFO: namespace projected-5209 deletion completed in 6.110403217s

• [SLOW TEST:10.285 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:28:47.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 29 20:28:47.312: INFO: Waiting up to 5m0s for pod "downward-api-e5b788c4-a0d9-45b9-8c5b-d2ed7d7a1ac8" in namespace "downward-api-1522" to be "success or failure"
Apr 29 20:28:47.315: INFO: Pod "downward-api-e5b788c4-a0d9-45b9-8c5b-d2ed7d7a1ac8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.840077ms
Apr 29 20:28:49.319: INFO: Pod "downward-api-e5b788c4-a0d9-45b9-8c5b-d2ed7d7a1ac8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006905218s
Apr 29 20:28:51.323: INFO: Pod "downward-api-e5b788c4-a0d9-45b9-8c5b-d2ed7d7a1ac8": Phase="Running", Reason="", readiness=true. Elapsed: 4.01110423s
Apr 29 20:28:53.328: INFO: Pod "downward-api-e5b788c4-a0d9-45b9-8c5b-d2ed7d7a1ac8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015743009s
STEP: Saw pod success
Apr 29 20:28:53.328: INFO: Pod "downward-api-e5b788c4-a0d9-45b9-8c5b-d2ed7d7a1ac8" satisfied condition "success or failure"
Apr 29 20:28:53.331: INFO: Trying to get logs from node iruya-worker pod downward-api-e5b788c4-a0d9-45b9-8c5b-d2ed7d7a1ac8 container dapi-container: 
STEP: delete the pod
Apr 29 20:28:53.368: INFO: Waiting for pod downward-api-e5b788c4-a0d9-45b9-8c5b-d2ed7d7a1ac8 to disappear
Apr 29 20:28:53.381: INFO: Pod downward-api-e5b788c4-a0d9-45b9-8c5b-d2ed7d7a1ac8 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:28:53.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1522" for this suite.
Apr 29 20:28:59.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:28:59.494: INFO: namespace downward-api-1522 deletion completed in 6.10859377s

• [SLOW TEST:12.297 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:28:59.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 29 20:28:59.534: INFO: Waiting up to 5m0s for pod "pod-f2fb0880-874c-4722-99b3-977e555f72e2" in namespace "emptydir-6438" to be "success or failure"
Apr 29 20:28:59.550: INFO: Pod "pod-f2fb0880-874c-4722-99b3-977e555f72e2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.312462ms
Apr 29 20:29:01.554: INFO: Pod "pod-f2fb0880-874c-4722-99b3-977e555f72e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020366102s
Apr 29 20:29:03.558: INFO: Pod "pod-f2fb0880-874c-4722-99b3-977e555f72e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024076429s
STEP: Saw pod success
Apr 29 20:29:03.558: INFO: Pod "pod-f2fb0880-874c-4722-99b3-977e555f72e2" satisfied condition "success or failure"
Apr 29 20:29:03.561: INFO: Trying to get logs from node iruya-worker2 pod pod-f2fb0880-874c-4722-99b3-977e555f72e2 container test-container: 
STEP: delete the pod
Apr 29 20:29:03.777: INFO: Waiting for pod pod-f2fb0880-874c-4722-99b3-977e555f72e2 to disappear
Apr 29 20:29:03.813: INFO: Pod pod-f2fb0880-874c-4722-99b3-977e555f72e2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:29:03.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6438" for this suite.
Apr 29 20:29:09.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:29:09.986: INFO: namespace emptydir-6438 deletion completed in 6.168345646s

• [SLOW TEST:10.491 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:29:09.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0429 20:29:21.581566       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 29 20:29:21.581: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:29:21.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6099" for this suite.
Apr 29 20:29:27.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:29:27.942: INFO: namespace gc-6099 deletion completed in 6.357723773s

• [SLOW TEST:17.955 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:29:27.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Apr 29 20:29:38.050: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3868 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 20:29:38.050: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 20:29:38.196: INFO: Exec stderr: ""
Apr 29 20:29:38.196: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3868 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 20:29:38.196: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 20:29:38.323: INFO: Exec stderr: ""
Apr 29 20:29:38.323: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3868 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 20:29:38.323: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 20:29:38.456: INFO: Exec stderr: ""
Apr 29 20:29:38.457: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3868 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 20:29:38.457: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 20:29:38.567: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Apr 29 20:29:38.567: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3868 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 20:29:38.567: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 20:29:38.696: INFO: Exec stderr: ""
Apr 29 20:29:38.696: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3868 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 20:29:38.696: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 20:29:38.827: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Apr 29 20:29:38.827: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3868 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 20:29:38.827: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 20:29:38.965: INFO: Exec stderr: ""
Apr 29 20:29:38.965: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3868 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 20:29:38.965: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 20:29:39.095: INFO: Exec stderr: ""
Apr 29 20:29:39.095: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3868 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 20:29:39.095: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 20:29:39.213: INFO: Exec stderr: ""
Apr 29 20:29:39.213: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3868 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 20:29:39.213: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 20:29:39.347: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:29:39.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-3868" for this suite.
Apr 29 20:30:31.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:30:31.475: INFO: namespace e2e-kubelet-etc-hosts-3868 deletion completed in 52.123032668s

• [SLOW TEST:63.533 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:30:31.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 29 20:30:31.525: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2756933e-c037-4c4e-aec7-0fdde3312ece" in namespace "projected-3167" to be "success or failure"
Apr 29 20:30:31.561: INFO: Pod "downwardapi-volume-2756933e-c037-4c4e-aec7-0fdde3312ece": Phase="Pending", Reason="", readiness=false. Elapsed: 35.602249ms
Apr 29 20:30:33.565: INFO: Pod "downwardapi-volume-2756933e-c037-4c4e-aec7-0fdde3312ece": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039816346s
Apr 29 20:30:35.569: INFO: Pod "downwardapi-volume-2756933e-c037-4c4e-aec7-0fdde3312ece": Phase="Running", Reason="", readiness=true. Elapsed: 4.043450478s
Apr 29 20:30:37.590: INFO: Pod "downwardapi-volume-2756933e-c037-4c4e-aec7-0fdde3312ece": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.064482252s
STEP: Saw pod success
Apr 29 20:30:37.590: INFO: Pod "downwardapi-volume-2756933e-c037-4c4e-aec7-0fdde3312ece" satisfied condition "success or failure"
Apr 29 20:30:37.593: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-2756933e-c037-4c4e-aec7-0fdde3312ece container client-container: 
STEP: delete the pod
Apr 29 20:30:37.613: INFO: Waiting for pod downwardapi-volume-2756933e-c037-4c4e-aec7-0fdde3312ece to disappear
Apr 29 20:30:37.617: INFO: Pod downwardapi-volume-2756933e-c037-4c4e-aec7-0fdde3312ece no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:30:37.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3167" for this suite.
Apr 29 20:30:43.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:30:43.719: INFO: namespace projected-3167 deletion completed in 6.097932554s

• [SLOW TEST:12.243 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:30:43.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 29 20:30:43.751: INFO: Creating deployment "nginx-deployment"
Apr 29 20:30:43.772: INFO: Waiting for observed generation 1
Apr 29 20:30:45.882: INFO: Waiting for all required pods to come up
Apr 29 20:30:45.889: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Apr 29 20:30:53.897: INFO: Waiting for deployment "nginx-deployment" to complete
Apr 29 20:30:53.903: INFO: Updating deployment "nginx-deployment" with a non-existent image
Apr 29 20:30:53.908: INFO: Updating deployment nginx-deployment
Apr 29 20:30:53.908: INFO: Waiting for observed generation 2
Apr 29 20:30:55.914: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Apr 29 20:30:55.916: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Apr 29 20:30:55.918: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Apr 29 20:30:55.924: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Apr 29 20:30:55.924: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Apr 29 20:30:56.095: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Apr 29 20:30:56.100: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Apr 29 20:30:56.100: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Apr 29 20:30:56.107: INFO: Updating deployment nginx-deployment
Apr 29 20:30:56.107: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Apr 29 20:30:56.170: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Apr 29 20:30:56.477: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
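The proportional split verified above (old ReplicaSet 8 → 20, new ReplicaSet 5 → 13) can be reproduced by scaling each ReplicaSet in proportion to its share of the allowed pod total, which is what "proportional scaling" means here: the scale-up from 10 to 30 raises the allowed total (replicas + maxSurge) from 13 to 33, and each ReplicaSet is grown toward its proportional share, with rounding deciding where the leftover pod lands. The sketch below is an approximation of the controller's behavior, not the upstream implementation:

```python
# Proportional scaling sketch for a Deployment scaled 10 -> 30 with
# maxSurge=3. Approximates the deployment controller's proportional
# split; illustrative only, not the upstream k8s implementation.

def proportional_scale(rs_sizes, new_allowed, old_allowed):
    # new_allowed = new replicas + maxSurge (30 + 3 = 33)
    # old_allowed = previously annotated max-replicas (10 + 3 = 13)
    scaled = {}
    for name, size in rs_sizes.items():
        # each RS grows toward its proportional share of the new total
        scaled[name] = int(round(size * new_allowed / old_allowed))
    return scaled

# Old RS at 8 pods, new RS at 5 pods before the scale-up.
print(proportional_scale({"old": 8, "new": 5}, 33, 13))
# {'old': 20, 'new': 13}: matches .spec.replicas verified in the log
```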
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Apr 29 20:30:56.495: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-8191,SelfLink:/apis/apps/v1/namespaces/deployment-8191/deployments/nginx-deployment,UID:776dd7e8-7143-4247-abf7-f9772bdd421c,ResourceVersion:2898411,Generation:3,CreationTimestamp:2021-04-29 20:30:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2021-04-29 20:30:54 +0000 UTC 2021-04-29 20:30:43 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2021-04-29 20:30:56 +0000 UTC 2021-04-29 20:30:56 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Apr 29 20:30:56.821: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-8191,SelfLink:/apis/apps/v1/namespaces/deployment-8191/replicasets/nginx-deployment-55fb7cb77f,UID:d6195cfc-c547-442f-979f-d3d32504cf3d,ResourceVersion:2898391,Generation:3,CreationTimestamp:2021-04-29 20:30:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 776dd7e8-7143-4247-abf7-f9772bdd421c 0xc002786277 0xc002786278}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Apr 29 20:30:56.821: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Apr 29 20:30:56.821: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-8191,SelfLink:/apis/apps/v1/namespaces/deployment-8191/replicasets/nginx-deployment-7b8c6f4498,UID:0bb68f1c-f0bb-4c3a-a9b6-3eb44fbcd569,ResourceVersion:2898390,Generation:3,CreationTimestamp:2021-04-29 20:30:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 776dd7e8-7143-4247-abf7-f9772bdd421c 0xc002786357 0xc002786358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Apr 29 20:30:57.164: INFO: Pod "nginx-deployment-55fb7cb77f-47j6j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-47j6j,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-55fb7cb77f-47j6j,UID:05e12041-c956-41a0-9f29-0f307f1d16f8,ResourceVersion:2898401,Generation:0,CreationTimestamp:2021-04-29 20:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d6195cfc-c547-442f-979f-d3d32504cf3d 0xc002786ce7 0xc002786ce8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002786d60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002786d80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.164: INFO: Pod "nginx-deployment-55fb7cb77f-7zcwl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7zcwl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-55fb7cb77f-7zcwl,UID:d7098789-f7df-4286-9109-30873b57b777,ResourceVersion:2898380,Generation:0,CreationTimestamp:2021-04-29 20:30:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d6195cfc-c547-442f-979f-d3d32504cf3d 0xc002786e07 0xc002786e08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002786e80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002786ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:54 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2021-04-29 20:30:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.164: INFO: Pod "nginx-deployment-55fb7cb77f-9kq9g" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9kq9g,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-55fb7cb77f-9kq9g,UID:eeeee975-faa7-41e2-9500-521aae287c7a,ResourceVersion:2898444,Generation:0,CreationTimestamp:2021-04-29 20:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d6195cfc-c547-442f-979f-d3d32504cf3d 0xc002786f70 0xc002786f71}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002786ff0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002787010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.165: INFO: Pod "nginx-deployment-55fb7cb77f-b7vfq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-b7vfq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-55fb7cb77f-b7vfq,UID:95efefe6-696e-4dbf-a13d-cee634b4f11f,ResourceVersion:2898416,Generation:0,CreationTimestamp:2021-04-29 20:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d6195cfc-c547-442f-979f-d3d32504cf3d 0xc002787097 0xc002787098}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002787110} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002787130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.165: INFO: Pod "nginx-deployment-55fb7cb77f-mhtj7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mhtj7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-55fb7cb77f-mhtj7,UID:ac50f1d7-c4e3-4869-8b22-af9cd9e82c08,ResourceVersion:2898440,Generation:0,CreationTimestamp:2021-04-29 20:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d6195cfc-c547-442f-979f-d3d32504cf3d 0xc0027871b7 0xc0027871b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002787230} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002787250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.165: INFO: Pod "nginx-deployment-55fb7cb77f-phvtl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-phvtl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-55fb7cb77f-phvtl,UID:af27ee74-ee67-47f3-a5bb-e7f817e6bb3f,ResourceVersion:2898352,Generation:0,CreationTimestamp:2021-04-29 20:30:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d6195cfc-c547-442f-979f-d3d32504cf3d 0xc0027872d7 0xc0027872d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002787350} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002787370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:53 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2021-04-29 20:30:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.166: INFO: Pod "nginx-deployment-55fb7cb77f-qpdtm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qpdtm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-55fb7cb77f-qpdtm,UID:7b3d851f-519c-4601-a872-a57e8d6dda3d,ResourceVersion:2898364,Generation:0,CreationTimestamp:2021-04-29 20:30:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d6195cfc-c547-442f-979f-d3d32504cf3d 0xc002787440 0xc002787441}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0027874c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027874e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:53 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2021-04-29 20:30:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.166: INFO: Pod "nginx-deployment-55fb7cb77f-svzdb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-svzdb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-55fb7cb77f-svzdb,UID:a01f0ee8-0564-4e96-9d1c-d45be405a3ee,ResourceVersion:2898430,Generation:0,CreationTimestamp:2021-04-29 20:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d6195cfc-c547-442f-979f-d3d32504cf3d 0xc0027875b0 0xc0027875b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002787630} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002787650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.166: INFO: Pod "nginx-deployment-55fb7cb77f-v9xxh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-v9xxh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-55fb7cb77f-v9xxh,UID:2f9129df-41be-4170-b824-acf163a7f523,ResourceVersion:2898434,Generation:0,CreationTimestamp:2021-04-29 20:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d6195cfc-c547-442f-979f-d3d32504cf3d 0xc0027876e7 0xc0027876e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002787760} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002787780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.166: INFO: Pod "nginx-deployment-55fb7cb77f-xbwm2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xbwm2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-55fb7cb77f-xbwm2,UID:76f6922d-cf29-4ab7-93e0-0bef8127cf07,ResourceVersion:2898356,Generation:0,CreationTimestamp:2021-04-29 20:30:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d6195cfc-c547-442f-979f-d3d32504cf3d 0xc002787817 0xc002787818}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0027878a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027878c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:53 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-04-29 20:30:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.166: INFO: Pod "nginx-deployment-55fb7cb77f-xgrkq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xgrkq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-55fb7cb77f-xgrkq,UID:7a4b25f9-baa4-41cc-b9c3-297c67904dac,ResourceVersion:2898414,Generation:0,CreationTimestamp:2021-04-29 20:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d6195cfc-c547-442f-979f-d3d32504cf3d 0xc0027879c0 0xc0027879c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002787a50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002787a70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.167: INFO: Pod "nginx-deployment-55fb7cb77f-zpnsz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zpnsz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-55fb7cb77f-zpnsz,UID:cf916e25-438b-4986-a756-f9baf476dabc,ResourceVersion:2898378,Generation:0,CreationTimestamp:2021-04-29 20:30:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d6195cfc-c547-442f-979f-d3d32504cf3d 0xc002787b07 0xc002787b08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002787b80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002787ba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:54 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-04-29 20:30:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.167: INFO: Pod "nginx-deployment-55fb7cb77f-zxq8m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zxq8m,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-55fb7cb77f-zxq8m,UID:6fcc6e90-b527-4c4a-b805-04a6ac9a1e87,ResourceVersion:2898433,Generation:0,CreationTimestamp:2021-04-29 20:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f d6195cfc-c547-442f-979f-d3d32504cf3d 0xc002787c70 0xc002787c71}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002787cf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002787d40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.167: INFO: Pod "nginx-deployment-7b8c6f4498-4nqg5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4nqg5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-7b8c6f4498-4nqg5,UID:c88b9f55-3e75-4626-abaa-7fa0c715a269,ResourceVersion:2898417,Generation:0,CreationTimestamp:2021-04-29 20:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 0bb68f1c-f0bb-4c3a-a9b6-3eb44fbcd569 0xc002787df7 0xc002787df8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002787ed0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002787ef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.167: INFO: Pod "nginx-deployment-7b8c6f4498-8nrmc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8nrmc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-7b8c6f4498-8nrmc,UID:34ea83c3-de3b-4083-b500-6a0f30620a54,ResourceVersion:2898409,Generation:0,CreationTimestamp:2021-04-29 20:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 0bb68f1c-f0bb-4c3a-a9b6-3eb44fbcd569 0xc002787f77 0xc002787f78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002787ff0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b66460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.167: INFO: Pod "nginx-deployment-7b8c6f4498-95h8l" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-95h8l,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-7b8c6f4498-95h8l,UID:40803fd6-2a19-41de-8a58-2f3800e6d894,ResourceVersion:2898410,Generation:0,CreationTimestamp:2021-04-29 20:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 0bb68f1c-f0bb-4c3a-a9b6-3eb44fbcd569 0xc002b664e7 0xc002b664e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b66560} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b66580}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.167: INFO: Pod "nginx-deployment-7b8c6f4498-d7lln" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-d7lln,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-7b8c6f4498-d7lln,UID:7f6a616c-cc1f-4c7a-a24d-d49e5a835b9c,ResourceVersion:2898308,Generation:0,CreationTimestamp:2021-04-29 20:30:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 0bb68f1c-f0bb-4c3a-a9b6-3eb44fbcd569 0xc002b667e7 0xc002b667e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b66860} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b66880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:43 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.70,StartTime:2021-04-29 20:30:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-04-29 20:30:52 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4f112dfe126c5ce3e2066bca0528c4e6c48cea197ea5ab5d8d7e7f71c53bb91b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.167: INFO: Pod "nginx-deployment-7b8c6f4498-dsmwr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dsmwr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-7b8c6f4498-dsmwr,UID:58b89066-3387-4517-b29c-a576e9db8e69,ResourceVersion:2898435,Generation:0,CreationTimestamp:2021-04-29 20:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 0bb68f1c-f0bb-4c3a-a9b6-3eb44fbcd569 0xc002b66957 0xc002b66958}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b669d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b669f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.167: INFO: Pod "nginx-deployment-7b8c6f4498-f4dth" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f4dth,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-7b8c6f4498-f4dth,UID:efbe168a-e7b5-4f7d-a0a3-49ac5b7ff79a,ResourceVersion:2898310,Generation:0,CreationTimestamp:2021-04-29 20:30:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 0bb68f1c-f0bb-4c3a-a9b6-3eb44fbcd569 0xc002b66a77 0xc002b66a78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b66af0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b66b10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:44 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.163,StartTime:2021-04-29 20:30:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-04-29 20:30:52 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a3f3b80912a633c1f0990d286686901fd8986f630ea6a926c9b723cad858acfa}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.168: INFO: Pod "nginx-deployment-7b8c6f4498-g97vw" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-g97vw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-7b8c6f4498-g97vw,UID:4bf5c369-7952-4487-8383-aee3a2029fa2,ResourceVersion:2898318,Generation:0,CreationTimestamp:2021-04-29 20:30:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 0bb68f1c-f0bb-4c3a-a9b6-3eb44fbcd569 0xc002b66be7 0xc002b66be8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b67c80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b67ca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:44 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.164,StartTime:2021-04-29 20:30:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-04-29 20:30:52 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://edbf3282e821ea7971c797e5ec3a81f6e614e3226a8211e83e98ae247f25445d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.168: INFO: Pod "nginx-deployment-7b8c6f4498-kjtb2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kjtb2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-7b8c6f4498-kjtb2,UID:2ed3d52f-801c-49c2-99ee-12590ac85b0b,ResourceVersion:2898420,Generation:0,CreationTimestamp:2021-04-29 20:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 0bb68f1c-f0bb-4c3a-a9b6-3eb44fbcd569 0xc002b67d77 0xc002b67d78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b67e00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b67e20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.168: INFO: Pod "nginx-deployment-7b8c6f4498-nqqpk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nqqpk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-7b8c6f4498-nqqpk,UID:0a2ce584-e7b1-4251-bf2e-779cb7b967ec,ResourceVersion:2898289,Generation:0,CreationTimestamp:2021-04-29 20:30:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 0bb68f1c-f0bb-4c3a-a9b6-3eb44fbcd569 0xc002b67ea7 0xc002b67ea8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b67f20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b67f40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:43 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.68,StartTime:2021-04-29 20:30:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-04-29 20:30:50 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://8e6d06eda8e2049aac750496d5e4cdd31fce249df01e9d976fbfbae0c0687d87}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.168: INFO: Pod "nginx-deployment-7b8c6f4498-qgxvr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qgxvr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-7b8c6f4498-qgxvr,UID:ee1881b5-926c-450f-8e76-cbeebf76af74,ResourceVersion:2898266,Generation:0,CreationTimestamp:2021-04-29 20:30:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 0bb68f1c-f0bb-4c3a-a9b6-3eb44fbcd569 0xc003a54017 0xc003a54018}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003a54090} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003a540b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:43 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:48 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:48 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:43 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.160,StartTime:2021-04-29 20:30:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-04-29 20:30:47 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://bc29677dea1960b7250f7c281b7445365b685330409bf29696a8ef799de9ed99}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.168: INFO: Pod "nginx-deployment-7b8c6f4498-qwjcw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qwjcw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-7b8c6f4498-qwjcw,UID:d2700904-87bf-454f-9abe-62456c8626c0,ResourceVersion:2898436,Generation:0,CreationTimestamp:2021-04-29 20:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 0bb68f1c-f0bb-4c3a-a9b6-3eb44fbcd569 0xc003a54187 0xc003a54188}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003a54200} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003a54220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.168: INFO: Pod "nginx-deployment-7b8c6f4498-rhk9d" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rhk9d,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-7b8c6f4498-rhk9d,UID:61ed9b06-1461-40cc-ae7e-82e0dda3dce4,ResourceVersion:2898305,Generation:0,CreationTimestamp:2021-04-29 20:30:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 0bb68f1c-f0bb-4c3a-a9b6-3eb44fbcd569 0xc003a542a7 0xc003a542a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003a54320} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003a54340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:43 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.162,StartTime:2021-04-29 20:30:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-04-29 20:30:52 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0c1b2a77cfa779542a44962cb1b909193792ec340809e0728cee1d3615cc622a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.169: INFO: Pod "nginx-deployment-7b8c6f4498-s66hq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-s66hq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-7b8c6f4498-s66hq,UID:47727866-844d-4697-acae-803a8eb07f99,ResourceVersion:2898437,Generation:0,CreationTimestamp:2021-04-29 20:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 0bb68f1c-f0bb-4c3a-a9b6-3eb44fbcd569 0xc003a54417 0xc003a54418}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003a54490} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003a544b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.169: INFO: Pod "nginx-deployment-7b8c6f4498-spzmx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-spzmx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-7b8c6f4498-spzmx,UID:3c15e2da-39e1-4a81-ac2f-305fbd5de288,ResourceVersion:2898441,Generation:0,CreationTimestamp:2021-04-29 20:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 0bb68f1c-f0bb-4c3a-a9b6-3eb44fbcd569 0xc003a54537 0xc003a54538}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003a545b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003a545d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.169: INFO: Pod "nginx-deployment-7b8c6f4498-vcsk6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vcsk6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-7b8c6f4498-vcsk6,UID:dc303d23-8454-4cc1-9158-b11e9037f861,ResourceVersion:2898442,Generation:0,CreationTimestamp:2021-04-29 20:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 0bb68f1c-f0bb-4c3a-a9b6-3eb44fbcd569 0xc003a54657 0xc003a54658}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003a546d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003a546f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.169: INFO: Pod "nginx-deployment-7b8c6f4498-vpqhd" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vpqhd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-7b8c6f4498-vpqhd,UID:16744c6a-a80c-4982-babd-1828f1786870,ResourceVersion:2898315,Generation:0,CreationTimestamp:2021-04-29 20:30:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 0bb68f1c-f0bb-4c3a-a9b6-3eb44fbcd569 0xc003a54777 0xc003a54778}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003a547f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003a54810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:43 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.71,StartTime:2021-04-29 20:30:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-04-29 20:30:52 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://38a52960cec987458b5ceddbaf8663a40de82ee144b3c699c70f66c227ac5a29}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.169: INFO: Pod "nginx-deployment-7b8c6f4498-vv5rc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vv5rc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-7b8c6f4498-vv5rc,UID:d259aebe-4846-4dbb-b1fb-f0af8cfdb9b2,ResourceVersion:2898415,Generation:0,CreationTimestamp:2021-04-29 20:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 0bb68f1c-f0bb-4c3a-a9b6-3eb44fbcd569 0xc003a548e7 0xc003a548e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003a54960} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003a54980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.169: INFO: Pod "nginx-deployment-7b8c6f4498-w22lq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-w22lq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-7b8c6f4498-w22lq,UID:9a6d4e24-a439-45b0-bf3b-5d8509d9ffa2,ResourceVersion:2898446,Generation:0,CreationTimestamp:2021-04-29 20:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 0bb68f1c-f0bb-4c3a-a9b6-3eb44fbcd569 0xc003a54a07 0xc003a54a08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003a54a80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003a54aa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:56 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-04-29 20:30:56 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.170: INFO: Pod "nginx-deployment-7b8c6f4498-xtkkm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xtkkm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-7b8c6f4498-xtkkm,UID:eb2c28c6-371a-4e9e-9f61-ed0ef7b41d02,ResourceVersion:2898418,Generation:0,CreationTimestamp:2021-04-29 20:30:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 0bb68f1c-f0bb-4c3a-a9b6-3eb44fbcd569 0xc003a54b67 0xc003a54b68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003a54be0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003a54c00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Apr 29 20:30:57.170: INFO: Pod "nginx-deployment-7b8c6f4498-zqd5z" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zqd5z,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8191,SelfLink:/api/v1/namespaces/deployment-8191/pods/nginx-deployment-7b8c6f4498-zqd5z,UID:a573a7ce-9776-4a88-8e99-576cd6e21a4f,ResourceVersion:2898313,Generation:0,CreationTimestamp:2021-04-29 20:30:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 0bb68f1c-f0bb-4c3a-a9b6-3eb44fbcd569 0xc003a54c87 0xc003a54c88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2hzbz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2hzbz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2hzbz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003a54d00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003a54d20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-04-29 20:30:43 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.161,StartTime:2021-04-29 20:30:44 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-04-29 20:30:52 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://51949a66069ad2b357a0f3948a8d7432f59e909a28eedb3067e9e4ac24be6f5e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:30:57.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8191" for this suite.
Apr 29 20:31:15.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:31:15.643: INFO: namespace deployment-8191 deletion completed in 18.271626026s

• [SLOW TEST:31.924 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:31:15.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Apr 29 20:31:22.641: INFO: 0 pods remaining
Apr 29 20:31:22.641: INFO: 0 pods have nil DeletionTimestamp
Apr 29 20:31:22.641: INFO: 
STEP: Gathering metrics
W0429 20:31:23.019800       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 29 20:31:23.019: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:31:23.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-395" for this suite.
Apr 29 20:31:31.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:31:31.476: INFO: namespace gc-395 deletion completed in 8.261069349s

• [SLOW TEST:15.832 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:31:31.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Apr 29 20:31:31.767: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:31:37.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1518" for this suite.
Apr 29 20:31:43.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:31:44.025: INFO: namespace init-container-1518 deletion completed in 6.118881324s

• [SLOW TEST:12.549 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:31:44.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-d4bc89c8-fc86-438f-afbd-fa4683090216
STEP: Creating configMap with name cm-test-opt-upd-d568c894-9a96-4cd4-8c98-29f9950a29b0
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-d4bc89c8-fc86-438f-afbd-fa4683090216
STEP: Updating configmap cm-test-opt-upd-d568c894-9a96-4cd4-8c98-29f9950a29b0
STEP: Creating configMap with name cm-test-opt-create-df1ec8bf-729d-4bdd-868d-907eb56eb17b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:33:10.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3535" for this suite.
Apr 29 20:33:32.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:33:33.088: INFO: namespace projected-3535 deletion completed in 22.132791889s

• [SLOW TEST:109.062 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:33:33.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Apr 29 20:33:39.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-8e341364-78e3-44db-a3a2-0a850ca968c8 -c busybox-main-container --namespace=emptydir-8825 -- cat /usr/share/volumeshare/shareddata.txt'
Apr 29 20:33:42.131: INFO: stderr: ""
Apr 29 20:33:42.131: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:33:42.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8825" for this suite.
Apr 29 20:33:48.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:33:48.282: INFO: namespace emptydir-8825 deletion completed in 6.14584184s

• [SLOW TEST:15.194 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:33:48.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Apr 29 20:33:48.334: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix199157835/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:33:48.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1523" for this suite.
Apr 29 20:33:54.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:33:54.518: INFO: namespace kubectl-1523 deletion completed in 6.108563846s

• [SLOW TEST:6.236 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:33:54.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 29 20:33:54.578: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:33:55.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-862" for this suite.
Apr 29 20:34:01.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:34:01.758: INFO: namespace custom-resource-definition-862 deletion completed in 6.103031361s

• [SLOW TEST:7.239 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:34:01.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1015
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-1015
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1015
Apr 29 20:34:01.849: INFO: Found 0 stateful pods, waiting for 1
Apr 29 20:34:11.854: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Apr 29 20:34:11.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1015 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 29 20:34:12.169: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Apr 29 20:34:12.169: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 29 20:34:12.169: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Apr 29 20:34:12.173: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Apr 29 20:34:22.178: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Apr 29 20:34:22.178: INFO: Waiting for statefulset status.replicas updated to 0
Apr 29 20:34:22.192: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999472s
Apr 29 20:34:23.196: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.995392395s
Apr 29 20:34:24.201: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.990747127s
Apr 29 20:34:25.205: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.985930766s
Apr 29 20:34:26.210: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.982187443s
Apr 29 20:34:27.214: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.977495167s
Apr 29 20:34:28.219: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.972819392s
Apr 29 20:34:29.223: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.968543455s
Apr 29 20:34:30.227: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.964336859s
Apr 29 20:34:31.231: INFO: Verifying statefulset ss doesn't scale past 1 for another 959.684077ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1015
Apr 29 20:34:32.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1015 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 29 20:34:32.501: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Apr 29 20:34:32.501: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 29 20:34:32.501: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Apr 29 20:34:32.528: INFO: Found 1 stateful pods, waiting for 3
Apr 29 20:34:42.533: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 20:34:42.533: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 29 20:34:42.533: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Apr 29 20:34:42.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1015 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 29 20:34:42.803: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Apr 29 20:34:42.803: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 29 20:34:42.803: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Apr 29 20:34:42.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1015 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 29 20:34:43.086: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Apr 29 20:34:43.086: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 29 20:34:43.086: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Apr 29 20:34:43.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1015 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 29 20:34:43.339: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Apr 29 20:34:43.339: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 29 20:34:43.339: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Apr 29 20:34:43.339: INFO: Waiting for statefulset status.replicas updated to 0
Apr 29 20:34:43.342: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Apr 29 20:34:53.350: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Apr 29 20:34:53.350: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Apr 29 20:34:53.350: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Apr 29 20:34:53.360: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999528s
Apr 29 20:34:54.366: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996152623s
Apr 29 20:34:55.371: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.990452579s
Apr 29 20:34:56.377: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.985308779s
Apr 29 20:34:57.382: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.980015952s
Apr 29 20:34:58.387: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.974699987s
Apr 29 20:34:59.393: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.96923177s
Apr 29 20:35:00.398: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.963993568s
Apr 29 20:35:01.403: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.95866994s
Apr 29 20:35:02.408: INFO: Verifying statefulset ss doesn't scale past 3 for another 953.859985ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-1015
Apr 29 20:35:03.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1015 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 29 20:35:03.661: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Apr 29 20:35:03.661: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 29 20:35:03.661: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Apr 29 20:35:03.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1015 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 29 20:35:03.894: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Apr 29 20:35:03.894: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 29 20:35:03.894: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Apr 29 20:35:03.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1015 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 29 20:35:04.142: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Apr 29 20:35:04.142: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 29 20:35:04.142: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Apr 29 20:35:04.142: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr 29 20:35:34.158: INFO: Deleting all statefulset in ns statefulset-1015
Apr 29 20:35:34.161: INFO: Scaling statefulset ss to 0
Apr 29 20:35:34.170: INFO: Waiting for statefulset status.replicas updated to 0
Apr 29 20:35:34.172: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:35:34.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1015" for this suite.
Apr 29 20:35:40.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:35:40.285: INFO: namespace statefulset-1015 deletion completed in 6.098512576s

• [SLOW TEST:98.528 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
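The ordered scale-down verified above (ss-2 removed first, then ss-1, then ss-0) is the default `OrderedReady` pod-management behavior. A minimal sketch of a comparable StatefulSet follows; the names, image, and probe path are illustrative assumptions, not taken from the test's actual spec:

```yaml
# Illustrative only: a StatefulSet whose readiness is gated on index.html,
# mirroring the mv /tmp/index.html trick in the log above.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test            # assumed headless service name
  replicas: 3
  podManagementPolicy: OrderedReady   # default; enforces ordered scale up/down
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: docker.io/library/nginx:1.14-alpine   # assumed image
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /index.html
            port: 80
```

With such an object, `kubectl scale statefulset ss --replicas=0` deletes pods in reverse ordinal order, and scaling halts at any unhealthy pod, which is what the "doesn't scale past 3" polling above is asserting.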
SSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:35:40.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-5941
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5941 to expose endpoints map[]
Apr 29 20:35:40.415: INFO: Get endpoints failed (13.290184ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Apr 29 20:35:41.419: INFO: successfully validated that service multi-endpoint-test in namespace services-5941 exposes endpoints map[] (1.017081181s elapsed)
STEP: Creating pod pod1 in namespace services-5941
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5941 to expose endpoints map[pod1:[100]]
Apr 29 20:35:44.475: INFO: successfully validated that service multi-endpoint-test in namespace services-5941 exposes endpoints map[pod1:[100]] (3.048759166s elapsed)
STEP: Creating pod pod2 in namespace services-5941
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5941 to expose endpoints map[pod1:[100] pod2:[101]]
Apr 29 20:35:47.672: INFO: successfully validated that service multi-endpoint-test in namespace services-5941 exposes endpoints map[pod1:[100] pod2:[101]] (3.192804855s elapsed)
STEP: Deleting pod pod1 in namespace services-5941
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5941 to expose endpoints map[pod2:[101]]
Apr 29 20:35:47.710: INFO: successfully validated that service multi-endpoint-test in namespace services-5941 exposes endpoints map[pod2:[101]] (33.729139ms elapsed)
STEP: Deleting pod pod2 in namespace services-5941
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5941 to expose endpoints map[]
Apr 29 20:35:48.730: INFO: successfully validated that service multi-endpoint-test in namespace services-5941 exposes endpoints map[] (1.015367525s elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:35:48.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5941" for this suite.
Apr 29 20:36:10.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:36:10.876: INFO: namespace services-5941 deletion completed in 22.103190951s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:30.591 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
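The endpoint maps in the log (`map[pod1:[100] pod2:[101]]`) come from a Service exposing two named ports with distinct target ports. A hedged sketch of such a Service, with assumed port names and selector labels:

```yaml
# Illustrative multi-port Service; port names and selector are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test
  ports:
  - name: portname1
    port: 80
    targetPort: 100   # matches pod1's endpoint port in the log
  - name: portname2
    port: 81
    targetPort: 101   # matches pod2's endpoint port in the log
```

Each pod backs only the port it actually serves, so the Endpoints object grows and shrinks per-port as pods are created and deleted, exactly as the validation steps above show.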
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:36:10.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-ba91d8ad-a399-4e2e-b23e-707724da2371
STEP: Creating a pod to test consume secrets
Apr 29 20:36:10.955: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c9960424-a835-4855-8f6e-b26b38a7a901" in namespace "projected-4882" to be "success or failure"
Apr 29 20:36:10.958: INFO: Pod "pod-projected-secrets-c9960424-a835-4855-8f6e-b26b38a7a901": Phase="Pending", Reason="", readiness=false. Elapsed: 3.623717ms
Apr 29 20:36:12.963: INFO: Pod "pod-projected-secrets-c9960424-a835-4855-8f6e-b26b38a7a901": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007900161s
Apr 29 20:36:14.967: INFO: Pod "pod-projected-secrets-c9960424-a835-4855-8f6e-b26b38a7a901": Phase="Running", Reason="", readiness=true. Elapsed: 4.011931616s
Apr 29 20:36:16.971: INFO: Pod "pod-projected-secrets-c9960424-a835-4855-8f6e-b26b38a7a901": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016523552s
STEP: Saw pod success
Apr 29 20:36:16.971: INFO: Pod "pod-projected-secrets-c9960424-a835-4855-8f6e-b26b38a7a901" satisfied condition "success or failure"
Apr 29 20:36:16.974: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-c9960424-a835-4855-8f6e-b26b38a7a901 container projected-secret-volume-test: 
STEP: delete the pod
Apr 29 20:36:16.993: INFO: Waiting for pod pod-projected-secrets-c9960424-a835-4855-8f6e-b26b38a7a901 to disappear
Apr 29 20:36:16.997: INFO: Pod pod-projected-secrets-c9960424-a835-4855-8f6e-b26b38a7a901 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:36:16.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4882" for this suite.
Apr 29 20:36:23.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:36:23.136: INFO: namespace projected-4882 deletion completed in 6.135959017s

• [SLOW TEST:12.260 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
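The projected-secret test above mounts a secret through a `projected` volume and checks file permissions. A minimal sketch of the pod shape involved; the image, mount path, and mode value are assumptions for illustration:

```yaml
# Illustrative pod consuming a secret via a projected volume with defaultMode.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["ls", "-l", "/etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400        # the mode under test (assumed value)
      sources:
      - secret:
          name: projected-secret-test   # assumed secret name
```

The pod runs to completion ("success or failure" in the log) because its command exits after listing the mounted files.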
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:36:23.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 29 20:36:23.214: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd25e14b-2158-4664-8066-e690f250f6b4" in namespace "downward-api-4814" to be "success or failure"
Apr 29 20:36:23.278: INFO: Pod "downwardapi-volume-cd25e14b-2158-4664-8066-e690f250f6b4": Phase="Pending", Reason="", readiness=false. Elapsed: 63.742276ms
Apr 29 20:36:25.297: INFO: Pod "downwardapi-volume-cd25e14b-2158-4664-8066-e690f250f6b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082454451s
Apr 29 20:36:27.301: INFO: Pod "downwardapi-volume-cd25e14b-2158-4664-8066-e690f250f6b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08685149s
STEP: Saw pod success
Apr 29 20:36:27.301: INFO: Pod "downwardapi-volume-cd25e14b-2158-4664-8066-e690f250f6b4" satisfied condition "success or failure"
Apr 29 20:36:27.304: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-cd25e14b-2158-4664-8066-e690f250f6b4 container client-container: 
STEP: delete the pod
Apr 29 20:36:27.358: INFO: Waiting for pod downwardapi-volume-cd25e14b-2158-4664-8066-e690f250f6b4 to disappear
Apr 29 20:36:27.374: INFO: Pod downwardapi-volume-cd25e14b-2158-4664-8066-e690f250f6b4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:36:27.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4814" for this suite.
Apr 29 20:36:33.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:36:33.476: INFO: namespace downward-api-4814 deletion completed in 6.097518626s

• [SLOW TEST:10.339 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
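The Downward API volume test above surfaces the container's memory request as a file. A hedged sketch of that volume configuration; container name, request size, and paths are assumptions:

```yaml
# Illustrative pod exposing its own memory request via a downwardAPI volume.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["cat", "/etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi          # assumed request; printed in bytes by the file
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
```

`resourceFieldRef` (unlike `fieldRef`) must name the container whose resource is being projected, since requests are per-container.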
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:36:33.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 29 20:36:41.629: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 29 20:36:41.635: INFO: Pod pod-with-prestop-http-hook still exists
Apr 29 20:36:43.636: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 29 20:36:43.639: INFO: Pod pod-with-prestop-http-hook still exists
Apr 29 20:36:45.636: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 29 20:36:45.691: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:36:45.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4896" for this suite.
Apr 29 20:37:07.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:37:07.800: INFO: namespace container-lifecycle-hook-4896 deletion completed in 22.098214863s

• [SLOW TEST:34.324 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
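The preStop test above deletes a pod and then confirms its hook fired against a separate handler pod (the "container to handle the HTTPGet hook request"). A minimal sketch of the hooked container; the handler path and port are assumptions:

```yaml
# Illustrative container with an httpGet preStop hook, fired on pod deletion.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: docker.io/library/nginx:1.14-alpine   # assumed image
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop   # assumed handler endpoint
          port: 8080
          host: 10.244.1.5          # assumed handler pod IP
```

The kubelet blocks pod termination on the hook (up to the grace period), which is why the log polls "Waiting for pod ... to disappear" for several seconds before checking that the handler saw the request.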
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:37:07.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 29 20:37:07.881: INFO: Waiting up to 5m0s for pod "downwardapi-volume-99be0e10-90ed-4d38-b70f-19f6af319e96" in namespace "projected-6098" to be "success or failure"
Apr 29 20:37:07.897: INFO: Pod "downwardapi-volume-99be0e10-90ed-4d38-b70f-19f6af319e96": Phase="Pending", Reason="", readiness=false. Elapsed: 15.81946ms
Apr 29 20:37:09.901: INFO: Pod "downwardapi-volume-99be0e10-90ed-4d38-b70f-19f6af319e96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020684883s
Apr 29 20:37:11.906: INFO: Pod "downwardapi-volume-99be0e10-90ed-4d38-b70f-19f6af319e96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025047332s
STEP: Saw pod success
Apr 29 20:37:11.906: INFO: Pod "downwardapi-volume-99be0e10-90ed-4d38-b70f-19f6af319e96" satisfied condition "success or failure"
Apr 29 20:37:11.909: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-99be0e10-90ed-4d38-b70f-19f6af319e96 container client-container: 
STEP: delete the pod
Apr 29 20:37:11.934: INFO: Waiting for pod downwardapi-volume-99be0e10-90ed-4d38-b70f-19f6af319e96 to disappear
Apr 29 20:37:11.938: INFO: Pod downwardapi-volume-99be0e10-90ed-4d38-b70f-19f6af319e96 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:37:11.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6098" for this suite.
Apr 29 20:37:18.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:37:18.094: INFO: namespace projected-6098 deletion completed in 6.152337217s

• [SLOW TEST:10.294 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
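The projected downwardAPI test above sets a per-item `mode` rather than a volume-wide `defaultMode`. A hedged sketch of that volume shape; the path, mode, and field are assumptions:

```yaml
# Illustrative projected volume with a per-item file mode on a downwardAPI item.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["ls", "-l", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            mode: 0400            # per-item mode, overrides any defaultMode
            fieldRef:
              fieldPath: metadata.name
```

Per-item `mode` takes precedence over the volume's `defaultMode`, which is the distinction between this test and the defaultMode test earlier in the run.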
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:37:18.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Apr 29 20:37:18.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5531'
Apr 29 20:37:18.477: INFO: stderr: ""
Apr 29 20:37:18.477: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 29 20:37:18.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5531'
Apr 29 20:37:18.601: INFO: stderr: ""
Apr 29 20:37:18.601: INFO: stdout: "update-demo-nautilus-6fmkp update-demo-nautilus-xcwjm "
Apr 29 20:37:18.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6fmkp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5531'
Apr 29 20:37:18.715: INFO: stderr: ""
Apr 29 20:37:18.715: INFO: stdout: ""
Apr 29 20:37:18.715: INFO: update-demo-nautilus-6fmkp is created but not running
Apr 29 20:37:23.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5531'
Apr 29 20:37:23.823: INFO: stderr: ""
Apr 29 20:37:23.823: INFO: stdout: "update-demo-nautilus-6fmkp update-demo-nautilus-xcwjm "
Apr 29 20:37:23.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6fmkp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5531'
Apr 29 20:37:23.909: INFO: stderr: ""
Apr 29 20:37:23.909: INFO: stdout: "true"
Apr 29 20:37:23.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6fmkp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5531'
Apr 29 20:37:24.002: INFO: stderr: ""
Apr 29 20:37:24.002: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 29 20:37:24.002: INFO: validating pod update-demo-nautilus-6fmkp
Apr 29 20:37:24.006: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 29 20:37:24.006: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 29 20:37:24.006: INFO: update-demo-nautilus-6fmkp is verified up and running
Apr 29 20:37:24.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xcwjm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5531'
Apr 29 20:37:24.099: INFO: stderr: ""
Apr 29 20:37:24.099: INFO: stdout: "true"
Apr 29 20:37:24.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xcwjm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5531'
Apr 29 20:37:24.190: INFO: stderr: ""
Apr 29 20:37:24.190: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 29 20:37:24.190: INFO: validating pod update-demo-nautilus-xcwjm
Apr 29 20:37:24.193: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 29 20:37:24.193: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 29 20:37:24.193: INFO: update-demo-nautilus-xcwjm is verified up and running
STEP: scaling down the replication controller
Apr 29 20:37:24.195: INFO: scanned /root for discovery docs: 
Apr 29 20:37:24.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5531'
Apr 29 20:37:25.318: INFO: stderr: ""
Apr 29 20:37:25.318: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 29 20:37:25.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5531'
Apr 29 20:37:25.412: INFO: stderr: ""
Apr 29 20:37:25.412: INFO: stdout: "update-demo-nautilus-6fmkp update-demo-nautilus-xcwjm "
STEP: Replicas for name=update-demo: expected=1 actual=2
Apr 29 20:37:30.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5531'
Apr 29 20:37:30.506: INFO: stderr: ""
Apr 29 20:37:30.506: INFO: stdout: "update-demo-nautilus-6fmkp "
Apr 29 20:37:30.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6fmkp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5531'
Apr 29 20:37:30.606: INFO: stderr: ""
Apr 29 20:37:30.606: INFO: stdout: "true"
Apr 29 20:37:30.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6fmkp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5531'
Apr 29 20:37:30.711: INFO: stderr: ""
Apr 29 20:37:30.711: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 29 20:37:30.711: INFO: validating pod update-demo-nautilus-6fmkp
Apr 29 20:37:30.715: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 29 20:37:30.715: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 29 20:37:30.715: INFO: update-demo-nautilus-6fmkp is verified up and running
STEP: scaling up the replication controller
Apr 29 20:37:30.718: INFO: scanned /root for discovery docs: 
Apr 29 20:37:30.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5531'
Apr 29 20:37:31.846: INFO: stderr: ""
Apr 29 20:37:31.846: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 29 20:37:31.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5531'
Apr 29 20:37:31.939: INFO: stderr: ""
Apr 29 20:37:31.939: INFO: stdout: "update-demo-nautilus-2s8lz update-demo-nautilus-6fmkp "
Apr 29 20:37:31.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2s8lz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5531'
Apr 29 20:37:32.030: INFO: stderr: ""
Apr 29 20:37:32.031: INFO: stdout: ""
Apr 29 20:37:32.031: INFO: update-demo-nautilus-2s8lz is created but not running
Apr 29 20:37:37.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5531'
Apr 29 20:37:37.130: INFO: stderr: ""
Apr 29 20:37:37.130: INFO: stdout: "update-demo-nautilus-2s8lz update-demo-nautilus-6fmkp "
Apr 29 20:37:37.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2s8lz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5531'
Apr 29 20:37:37.221: INFO: stderr: ""
Apr 29 20:37:37.221: INFO: stdout: "true"
Apr 29 20:37:37.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2s8lz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5531'
Apr 29 20:37:37.313: INFO: stderr: ""
Apr 29 20:37:37.314: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 29 20:37:37.314: INFO: validating pod update-demo-nautilus-2s8lz
Apr 29 20:37:37.317: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 29 20:37:37.317: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 29 20:37:37.317: INFO: update-demo-nautilus-2s8lz is verified up and running
Apr 29 20:37:37.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6fmkp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5531'
Apr 29 20:37:37.441: INFO: stderr: ""
Apr 29 20:37:37.441: INFO: stdout: "true"
Apr 29 20:37:37.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6fmkp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5531'
Apr 29 20:37:37.529: INFO: stderr: ""
Apr 29 20:37:37.529: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 29 20:37:37.529: INFO: validating pod update-demo-nautilus-6fmkp
Apr 29 20:37:37.531: INFO: got data: {
  "image": "nautilus.jpg"
}

Apr 29 20:37:37.532: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 29 20:37:37.532: INFO: update-demo-nautilus-6fmkp is verified up and running
STEP: using delete to clean up resources
Apr 29 20:37:37.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5531'
Apr 29 20:37:37.626: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 29 20:37:37.626: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 29 20:37:37.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5531'
Apr 29 20:37:37.721: INFO: stderr: "No resources found.\n"
Apr 29 20:37:37.721: INFO: stdout: ""
Apr 29 20:37:37.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5531 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 29 20:37:37.820: INFO: stderr: ""
Apr 29 20:37:37.820: INFO: stdout: "update-demo-nautilus-2s8lz\nupdate-demo-nautilus-6fmkp\n"
Apr 29 20:37:38.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5531'
Apr 29 20:37:38.425: INFO: stderr: "No resources found.\n"
Apr 29 20:37:38.425: INFO: stdout: ""
Apr 29 20:37:38.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5531 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 29 20:37:38.520: INFO: stderr: ""
Apr 29 20:37:38.520: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:37:38.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5531" for this suite.
Apr 29 20:38:00.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:38:00.637: INFO: namespace kubectl-5531 deletion completed in 22.112835376s

• [SLOW TEST:42.543 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
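Editor's note: the verification loop in the spec above repeatedly runs `kubectl get pods -o template` with two go-templates, one that prints `true` only when the `update-demo` container is in the `running` state, and one that prints that container's image. As an illustrative sketch only (the helper names `pod_running` and `pod_image` are hypothetical, not part of the e2e framework), the same checks can be expressed against the pod object the templates walk:

```python
def pod_running(pod, container="update-demo"):
    """Mimic the first go-template: true only if the named
    container has a 'running' entry in its state."""
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == container and "running" in status.get("state", {}):
            return True
    return False

def pod_image(pod, container="update-demo"):
    """Mimic the second go-template: return the image of the named container."""
    for c in pod.get("spec", {}).get("containers", []):
        if c.get("name") == container:
            return c.get("image")
    return None

# Mock of the pod object the real test reads via kubectl.
pod = {
    "spec": {"containers": [{"name": "update-demo",
                             "image": "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"}]},
    "status": {"containerStatuses": [{"name": "update-demo",
                                      "state": {"running": {}}}]},
}
print(pod_running(pod))  # True
print(pod_image(pod))    # gcr.io/kubernetes-e2e-test-images/nautilus:1.0
```

Only once both checks pass does the test log "is verified up and running", as seen for each replica above.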
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:38:00.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 29 20:38:05.222: INFO: Successfully updated pod "labelsupdateeb6d3e6d-0bdd-407b-b725-eb99c40886d3"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:38:09.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1814" for this suite.
Apr 29 20:38:31.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:38:31.391: INFO: namespace downward-api-1814 deletion completed in 22.13576394s

• [SLOW TEST:30.753 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:38:31.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3508
[It] Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-3508
STEP: Creating statefulset with conflicting port in namespace statefulset-3508
STEP: Waiting until pod test-pod will start running in namespace statefulset-3508
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3508
Apr 29 20:38:35.502: INFO: Observed stateful pod in namespace: statefulset-3508, name: ss-0, uid: d50d2801-d416-4537-8fb0-994b769ec7b3, status phase: Pending. Waiting for statefulset controller to delete.
Apr 29 20:38:35.876: INFO: Observed stateful pod in namespace: statefulset-3508, name: ss-0, uid: d50d2801-d416-4537-8fb0-994b769ec7b3, status phase: Failed. Waiting for statefulset controller to delete.
Apr 29 20:38:35.884: INFO: Observed stateful pod in namespace: statefulset-3508, name: ss-0, uid: d50d2801-d416-4537-8fb0-994b769ec7b3, status phase: Failed. Waiting for statefulset controller to delete.
Apr 29 20:38:35.939: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3508
STEP: Removing pod with conflicting port in namespace statefulset-3508
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3508 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr 29 20:38:40.028: INFO: Deleting all statefulset in ns statefulset-3508
Apr 29 20:38:40.032: INFO: Scaling statefulset ss to 0
Apr 29 20:38:50.049: INFO: Waiting for statefulset status.replicas updated to 0
Apr 29 20:38:50.051: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:38:50.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3508" for this suite.
Apr 29 20:38:56.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:38:56.167: INFO: namespace statefulset-3508 deletion completed in 6.098238242s

• [SLOW TEST:24.775 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
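Editor's note: the StatefulSet spec above waits for a specific event sequence on pod `ss-0`: it is observed `Pending`, then `Failed` (the port conflict), then a delete event, after which the controller recreates it. A minimal sketch of that wait condition (the function name `saw_recreate_and_delete` is hypothetical, not from the framework):

```python
def saw_recreate_and_delete(events):
    """True once the stateful pod has been observed Failed and
    subsequently deleted, mirroring the log's Pending -> Failed -> DELETED wait."""
    failed = False
    for ev in events:
        if ev == "Failed":
            failed = True
        elif ev == "DELETED" and failed:
            return True
    return False

# The sequence observed in the log above for ss-0.
observed = ["Pending", "Failed", "Failed", "DELETED"]
print(saw_recreate_and_delete(observed))  # True
```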
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:38:56.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-qn6s
STEP: Creating a pod to test atomic-volume-subpath
Apr 29 20:38:56.278: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-qn6s" in namespace "subpath-4580" to be "success or failure"
Apr 29 20:38:56.293: INFO: Pod "pod-subpath-test-secret-qn6s": Phase="Pending", Reason="", readiness=false. Elapsed: 15.475083ms
Apr 29 20:38:58.490: INFO: Pod "pod-subpath-test-secret-qn6s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211737228s
Apr 29 20:39:00.493: INFO: Pod "pod-subpath-test-secret-qn6s": Phase="Running", Reason="", readiness=true. Elapsed: 4.215364895s
Apr 29 20:39:02.497: INFO: Pod "pod-subpath-test-secret-qn6s": Phase="Running", Reason="", readiness=true. Elapsed: 6.218841203s
Apr 29 20:39:04.501: INFO: Pod "pod-subpath-test-secret-qn6s": Phase="Running", Reason="", readiness=true. Elapsed: 8.223320242s
Apr 29 20:39:06.505: INFO: Pod "pod-subpath-test-secret-qn6s": Phase="Running", Reason="", readiness=true. Elapsed: 10.227419736s
Apr 29 20:39:08.510: INFO: Pod "pod-subpath-test-secret-qn6s": Phase="Running", Reason="", readiness=true. Elapsed: 12.232040372s
Apr 29 20:39:10.514: INFO: Pod "pod-subpath-test-secret-qn6s": Phase="Running", Reason="", readiness=true. Elapsed: 14.236365398s
Apr 29 20:39:12.519: INFO: Pod "pod-subpath-test-secret-qn6s": Phase="Running", Reason="", readiness=true. Elapsed: 16.240841785s
Apr 29 20:39:14.523: INFO: Pod "pod-subpath-test-secret-qn6s": Phase="Running", Reason="", readiness=true. Elapsed: 18.244752659s
Apr 29 20:39:16.527: INFO: Pod "pod-subpath-test-secret-qn6s": Phase="Running", Reason="", readiness=true. Elapsed: 20.248989555s
Apr 29 20:39:18.531: INFO: Pod "pod-subpath-test-secret-qn6s": Phase="Running", Reason="", readiness=true. Elapsed: 22.253000539s
Apr 29 20:39:20.535: INFO: Pod "pod-subpath-test-secret-qn6s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.257348384s
STEP: Saw pod success
Apr 29 20:39:20.535: INFO: Pod "pod-subpath-test-secret-qn6s" satisfied condition "success or failure"
Apr 29 20:39:20.538: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-qn6s container test-container-subpath-secret-qn6s: 
STEP: delete the pod
Apr 29 20:39:20.574: INFO: Waiting for pod pod-subpath-test-secret-qn6s to disappear
Apr 29 20:39:20.590: INFO: Pod pod-subpath-test-secret-qn6s no longer exists
STEP: Deleting pod pod-subpath-test-secret-qn6s
Apr 29 20:39:20.590: INFO: Deleting pod "pod-subpath-test-secret-qn6s" in namespace "subpath-4580"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:39:20.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4580" for this suite.
Apr 29 20:39:26.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:39:26.723: INFO: namespace subpath-4580 deletion completed in 6.124989548s

• [SLOW TEST:30.555 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
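Editor's note: the "Waiting up to 5m0s for pod ... to be 'success or failure'" lines above are the framework's poll-with-timeout pattern: check the pod phase, sleep, and retry until the deadline. A generic sketch of that pattern (the `wait_for` helper is hypothetical; the real framework's implementation differs):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` every `interval` seconds until it returns
    truthy or `timeout` seconds elapse; injectable clock/sleep for testing."""
    start = clock()
    while True:
        if condition():
            return True
        if clock() - start >= timeout:
            return False
        sleep(interval)

# Simulated pod that reaches "Succeeded" on the third poll,
# like pod-subpath-test-secret-qn6s above.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for(lambda: next(phases) == "Succeeded",
               timeout=10, interval=0, sleep=lambda s: None))  # True
```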
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:39:26.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-12d6e388-0491-4c33-86cf-fc5b2408acce
STEP: Creating secret with name s-test-opt-upd-5581370d-e669-40a9-9207-9fc72311a565
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-12d6e388-0491-4c33-86cf-fc5b2408acce
STEP: Updating secret s-test-opt-upd-5581370d-e669-40a9-9207-9fc72311a565
STEP: Creating secret with name s-test-opt-create-43e0fc4e-a81c-49ad-ba50-9b239ba853bc
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:39:34.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8146" for this suite.
Apr 29 20:39:56.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:39:57.013: INFO: namespace projected-8146 deletion completed in 22.101322393s

• [SLOW TEST:30.289 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:39:57.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:40:02.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9767" for this suite.
Apr 29 20:40:08.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:40:09.012: INFO: namespace watch-9767 deletion completed in 6.179511691s

• [SLOW TEST:11.999 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
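Editor's note: the concurrent-watches spec above starts a watch from each resource version and asserts every watcher delivers resource versions in the identical order. The core assertion reduces to a sequence-equality check, sketched here (the `same_order` helper is illustrative, not the framework's code):

```python
def same_order(*sequences):
    """True only if every watcher reported the identical
    sequence of resource versions."""
    first = list(sequences[0])
    return all(list(s) == first for s in sequences[1:])

print(same_order([101, 102, 103], [101, 102, 103], [101, 102, 103]))  # True
print(same_order([101, 102, 103], [101, 103, 102]))                   # False
```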
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:40:09.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 29 20:40:09.078: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b1ba2389-1aab-4f49-833a-7113162aa7c5" in namespace "downward-api-7158" to be "success or failure"
Apr 29 20:40:09.082: INFO: Pod "downwardapi-volume-b1ba2389-1aab-4f49-833a-7113162aa7c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160706ms
Apr 29 20:40:11.086: INFO: Pod "downwardapi-volume-b1ba2389-1aab-4f49-833a-7113162aa7c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008444036s
Apr 29 20:40:13.091: INFO: Pod "downwardapi-volume-b1ba2389-1aab-4f49-833a-7113162aa7c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012699459s
STEP: Saw pod success
Apr 29 20:40:13.091: INFO: Pod "downwardapi-volume-b1ba2389-1aab-4f49-833a-7113162aa7c5" satisfied condition "success or failure"
Apr 29 20:40:13.094: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-b1ba2389-1aab-4f49-833a-7113162aa7c5 container client-container: 
STEP: delete the pod
Apr 29 20:40:13.113: INFO: Waiting for pod downwardapi-volume-b1ba2389-1aab-4f49-833a-7113162aa7c5 to disappear
Apr 29 20:40:13.117: INFO: Pod downwardapi-volume-b1ba2389-1aab-4f49-833a-7113162aa7c5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:40:13.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7158" for this suite.
Apr 29 20:40:19.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:40:19.220: INFO: namespace downward-api-7158 deletion completed in 6.099094166s

• [SLOW TEST:10.207 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:40:19.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Apr 29 20:40:19.313: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3008,SelfLink:/api/v1/namespaces/watch-3008/configmaps/e2e-watch-test-configmap-a,UID:ff1a0dc2-0878-4f1a-9a42-05af096247b1,ResourceVersion:2900877,Generation:0,CreationTimestamp:2021-04-29 20:40:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 29 20:40:19.314: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3008,SelfLink:/api/v1/namespaces/watch-3008/configmaps/e2e-watch-test-configmap-a,UID:ff1a0dc2-0878-4f1a-9a42-05af096247b1,ResourceVersion:2900877,Generation:0,CreationTimestamp:2021-04-29 20:40:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Apr 29 20:40:29.322: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3008,SelfLink:/api/v1/namespaces/watch-3008/configmaps/e2e-watch-test-configmap-a,UID:ff1a0dc2-0878-4f1a-9a42-05af096247b1,ResourceVersion:2900897,Generation:0,CreationTimestamp:2021-04-29 20:40:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Apr 29 20:40:29.322: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3008,SelfLink:/api/v1/namespaces/watch-3008/configmaps/e2e-watch-test-configmap-a,UID:ff1a0dc2-0878-4f1a-9a42-05af096247b1,ResourceVersion:2900897,Generation:0,CreationTimestamp:2021-04-29 20:40:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Apr 29 20:40:39.331: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3008,SelfLink:/api/v1/namespaces/watch-3008/configmaps/e2e-watch-test-configmap-a,UID:ff1a0dc2-0878-4f1a-9a42-05af096247b1,ResourceVersion:2900917,Generation:0,CreationTimestamp:2021-04-29 20:40:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 29 20:40:39.331: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3008,SelfLink:/api/v1/namespaces/watch-3008/configmaps/e2e-watch-test-configmap-a,UID:ff1a0dc2-0878-4f1a-9a42-05af096247b1,ResourceVersion:2900917,Generation:0,CreationTimestamp:2021-04-29 20:40:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Apr 29 20:40:49.339: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3008,SelfLink:/api/v1/namespaces/watch-3008/configmaps/e2e-watch-test-configmap-a,UID:ff1a0dc2-0878-4f1a-9a42-05af096247b1,ResourceVersion:2900937,Generation:0,CreationTimestamp:2021-04-29 20:40:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 29 20:40:49.339: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3008,SelfLink:/api/v1/namespaces/watch-3008/configmaps/e2e-watch-test-configmap-a,UID:ff1a0dc2-0878-4f1a-9a42-05af096247b1,ResourceVersion:2900937,Generation:0,CreationTimestamp:2021-04-29 20:40:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Apr 29 20:40:59.347: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3008,SelfLink:/api/v1/namespaces/watch-3008/configmaps/e2e-watch-test-configmap-b,UID:772067c6-9744-48ea-9d54-35826a911a7e,ResourceVersion:2900958,Generation:0,CreationTimestamp:2021-04-29 20:40:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 29 20:40:59.347: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3008,SelfLink:/api/v1/namespaces/watch-3008/configmaps/e2e-watch-test-configmap-b,UID:772067c6-9744-48ea-9d54-35826a911a7e,ResourceVersion:2900958,Generation:0,CreationTimestamp:2021-04-29 20:40:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Apr 29 20:41:09.353: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3008,SelfLink:/api/v1/namespaces/watch-3008/configmaps/e2e-watch-test-configmap-b,UID:772067c6-9744-48ea-9d54-35826a911a7e,ResourceVersion:2900979,Generation:0,CreationTimestamp:2021-04-29 20:40:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 29 20:41:09.353: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3008,SelfLink:/api/v1/namespaces/watch-3008/configmaps/e2e-watch-test-configmap-b,UID:772067c6-9744-48ea-9d54-35826a911a7e,ResourceVersion:2900979,Generation:0,CreationTimestamp:2021-04-29 20:40:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:41:19.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3008" for this suite.
Apr 29 20:41:25.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:41:25.462: INFO: namespace watch-3008 deletion completed in 6.103527998s

• [SLOW TEST:66.242 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:41:25.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 29 20:41:25.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3852'
Apr 29 20:41:25.622: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 29 20:41:25.622: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Apr 29 20:41:25.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-3852'
Apr 29 20:41:25.745: INFO: stderr: ""
Apr 29 20:41:25.745: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:41:25.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3852" for this suite.
Apr 29 20:41:31.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:41:31.854: INFO: namespace kubectl-3852 deletion completed in 6.101303381s

• [SLOW TEST:6.391 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:41:31.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 29 20:41:36.490: INFO: Successfully updated pod "annotationupdate3b4b097a-158a-4940-a321-fe0ef396e22c"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:41:40.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-430" for this suite.
Apr 29 20:42:02.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:42:02.635: INFO: namespace projected-430 deletion completed in 22.113827614s

• [SLOW TEST:30.778 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:42:02.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-07399110-51e0-4321-a1cf-3415cc225476
STEP: Creating a pod to test consume configMaps
Apr 29 20:42:02.713: INFO: Waiting up to 5m0s for pod "pod-configmaps-020af07c-4521-42c5-b94c-c2e199626570" in namespace "configmap-2321" to be "success or failure"
Apr 29 20:42:02.724: INFO: Pod "pod-configmaps-020af07c-4521-42c5-b94c-c2e199626570": Phase="Pending", Reason="", readiness=false. Elapsed: 10.33292ms
Apr 29 20:42:04.728: INFO: Pod "pod-configmaps-020af07c-4521-42c5-b94c-c2e199626570": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014648712s
Apr 29 20:42:06.733: INFO: Pod "pod-configmaps-020af07c-4521-42c5-b94c-c2e199626570": Phase="Running", Reason="", readiness=true. Elapsed: 4.019073193s
Apr 29 20:42:08.737: INFO: Pod "pod-configmaps-020af07c-4521-42c5-b94c-c2e199626570": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023512919s
STEP: Saw pod success
Apr 29 20:42:08.737: INFO: Pod "pod-configmaps-020af07c-4521-42c5-b94c-c2e199626570" satisfied condition "success or failure"
Apr 29 20:42:08.740: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-020af07c-4521-42c5-b94c-c2e199626570 container configmap-volume-test: 
STEP: delete the pod
Apr 29 20:42:08.774: INFO: Waiting for pod pod-configmaps-020af07c-4521-42c5-b94c-c2e199626570 to disappear
Apr 29 20:42:08.790: INFO: Pod pod-configmaps-020af07c-4521-42c5-b94c-c2e199626570 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:42:08.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2321" for this suite.
Apr 29 20:42:14.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:42:14.928: INFO: namespace configmap-2321 deletion completed in 6.134290936s

• [SLOW TEST:12.293 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:42:14.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:42:20.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1834" for this suite.
Apr 29 20:42:42.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:42:42.155: INFO: namespace replication-controller-1834 deletion completed in 22.118350095s

• [SLOW TEST:27.226 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:42:42.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Apr 29 20:42:42.228: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3407" to be "success or failure"
Apr 29 20:42:42.239: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.610029ms
Apr 29 20:42:44.244: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016179619s
Apr 29 20:42:46.253: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024981347s
Apr 29 20:42:48.257: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02913861s
STEP: Saw pod success
Apr 29 20:42:48.257: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Apr 29 20:42:48.259: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Apr 29 20:42:48.288: INFO: Waiting for pod pod-host-path-test to disappear
Apr 29 20:42:48.299: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:42:48.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-3407" for this suite.
Apr 29 20:42:54.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:42:54.411: INFO: namespace hostpath-3407 deletion completed in 6.108240213s

• [SLOW TEST:12.255 seconds]
[sig-storage] HostPath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:42:54.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3118
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 29 20:42:54.461: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 29 20:43:16.629: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.197:8080/dial?request=hostName&protocol=http&host=10.244.1.104&port=8080&tries=1'] Namespace:pod-network-test-3118 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 20:43:16.629: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 20:43:16.787: INFO: Waiting for endpoints: map[]
Apr 29 20:43:16.791: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.197:8080/dial?request=hostName&protocol=http&host=10.244.2.196&port=8080&tries=1'] Namespace:pod-network-test-3118 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 29 20:43:16.791: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 20:43:16.930: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:43:16.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3118" for this suite.
Apr 29 20:43:38.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:43:39.070: INFO: namespace pod-network-test-3118 deletion completed in 22.136578975s

• [SLOW TEST:44.659 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:43:39.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 29 20:43:39.116: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b9006289-cfb9-45c3-9695-e66de6317744" in namespace "projected-8929" to be "success or failure"
Apr 29 20:43:39.136: INFO: Pod "downwardapi-volume-b9006289-cfb9-45c3-9695-e66de6317744": Phase="Pending", Reason="", readiness=false. Elapsed: 19.93542ms
Apr 29 20:43:41.146: INFO: Pod "downwardapi-volume-b9006289-cfb9-45c3-9695-e66de6317744": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029795468s
Apr 29 20:43:43.182: INFO: Pod "downwardapi-volume-b9006289-cfb9-45c3-9695-e66de6317744": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065813402s
STEP: Saw pod success
Apr 29 20:43:43.182: INFO: Pod "downwardapi-volume-b9006289-cfb9-45c3-9695-e66de6317744" satisfied condition "success or failure"
Apr 29 20:43:43.184: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-b9006289-cfb9-45c3-9695-e66de6317744 container client-container: 
STEP: delete the pod
Apr 29 20:43:43.236: INFO: Waiting for pod downwardapi-volume-b9006289-cfb9-45c3-9695-e66de6317744 to disappear
Apr 29 20:43:43.240: INFO: Pod downwardapi-volume-b9006289-cfb9-45c3-9695-e66de6317744 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:43:43.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8929" for this suite.
Apr 29 20:43:49.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:43:49.348: INFO: namespace projected-8929 deletion completed in 6.104918477s

• [SLOW TEST:10.278 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 29 20:43:49.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 29 20:43:49.472: INFO: Waiting up to 5m0s for pod "downward-api-fd4d415f-5858-47a7-bb25-d9cfc5e5766f" in namespace "downward-api-5575" to be "success or failure"
Apr 29 20:43:49.479: INFO: Pod "downward-api-fd4d415f-5858-47a7-bb25-d9cfc5e5766f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.506442ms
Apr 29 20:43:51.491: INFO: Pod "downward-api-fd4d415f-5858-47a7-bb25-d9cfc5e5766f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019343407s
Apr 29 20:43:53.495: INFO: Pod "downward-api-fd4d415f-5858-47a7-bb25-d9cfc5e5766f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023480366s
STEP: Saw pod success
Apr 29 20:43:53.495: INFO: Pod "downward-api-fd4d415f-5858-47a7-bb25-d9cfc5e5766f" satisfied condition "success or failure"
Apr 29 20:43:53.498: INFO: Trying to get logs from node iruya-worker pod downward-api-fd4d415f-5858-47a7-bb25-d9cfc5e5766f container dapi-container: 
STEP: delete the pod
Apr 29 20:43:53.666: INFO: Waiting for pod downward-api-fd4d415f-5858-47a7-bb25-d9cfc5e5766f to disappear
Apr 29 20:43:53.683: INFO: Pod downward-api-fd4d415f-5858-47a7-bb25-d9cfc5e5766f no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 29 20:43:53.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5575" for this suite.
Apr 29 20:43:59.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 29 20:43:59.871: INFO: namespace downward-api-5575 deletion completed in 6.183916381s

• [SLOW TEST:10.522 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Apr 29 20:43:59.872: INFO: Running AfterSuite actions on all nodes
Apr 29 20:43:59.872: INFO: Running AfterSuite actions on node 1
Apr 29 20:43:59.872: INFO: Skipping dumping logs from cluster

Ran 215 of 4413 Specs in 5969.830 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4198 Skipped
PASS