I0511 12:55:52.541473 7 e2e.go:243] Starting e2e run "a7b21891-60b6-419d-84f9-2f1a64e0dd5b" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589201751 - Will randomize all specs
Will run 215 of 4412 specs

May 11 12:55:52.721: INFO: >>> kubeConfig: /root/.kube/config
May 11 12:55:52.723: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 11 12:55:52.743: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 11 12:55:52.766: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 11 12:55:52.766: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 11 12:55:52.766: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 11 12:55:52.772: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 11 12:55:52.772: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 11 12:55:52.772: INFO: e2e test version: v1.15.11
May 11 12:55:52.773: INFO: kube-apiserver version: v1.15.7
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe
  should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 12:55:52.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
May 11 12:55:52.840: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 11 12:55:52.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5556'
May 11 12:55:55.483: INFO: stderr: ""
May 11 12:55:55.483: INFO: stdout: "replicationcontroller/redis-master created\n"
May 11 12:55:55.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5556'
May 11 12:55:55.941: INFO: stderr: ""
May 11 12:55:55.941: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
May 11 12:55:56.944: INFO: Selector matched 1 pods for map[app:redis]
May 11 12:55:56.945: INFO: Found 0 / 1
May 11 12:55:57.945: INFO: Selector matched 1 pods for map[app:redis]
May 11 12:55:57.945: INFO: Found 0 / 1
May 11 12:55:58.995: INFO: Selector matched 1 pods for map[app:redis]
May 11 12:55:58.995: INFO: Found 0 / 1
May 11 12:55:59.959: INFO: Selector matched 1 pods for map[app:redis]
May 11 12:55:59.959: INFO: Found 1 / 1
May 11 12:55:59.959: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 11 12:55:59.976: INFO: Selector matched 1 pods for map[app:redis]
May 11 12:55:59.976: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
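[Note: editorial annotation, not test output] The two 'kubectl create -f -' invocations above pipe their manifests over stdin, so the log never echoes them. Below is a minimal sketch of what the test plausibly feeds in, reconstructed from the 'kubectl describe' output that follows; the names, labels, selector, image, and port 6379 come from the log, while the field layout and the named container port are assumptions:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: redis-master
      labels:
        app: redis
        role: master
    spec:
      replicas: 1
      selector:
        app: redis
        role: master
      template:
        metadata:
          labels:
            app: redis
            role: master
        spec:
          containers:
          - name: redis-master
            image: gcr.io/kubernetes-e2e-test-images/redis:1.0
            ports:
            - name: redis-server    # assumed: the Service's TargetPort "redis-server" implies a named port
              containerPort: 6379
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: redis-master
      labels:
        app: redis
        role: master
    spec:
      selector:
        app: redis
        role: master
      ports:
      - port: 6379
        targetPort: redis-server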
May 11 12:55:59.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-lm26w --namespace=kubectl-5556'
May 11 12:56:00.151: INFO: stderr: ""
May 11 12:56:00.151: INFO: stdout: "Name: redis-master-lm26w\nNamespace: kubectl-5556\nPriority: 0\nNode: iruya-worker/172.17.0.6\nStart Time: Mon, 11 May 2020 12:55:55 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.119\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://889dcc232a62a512f17b9a696b5f9425bfba0379c4515ae963d4bacda7c71265\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 11 May 2020 12:55:59 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-7gp65 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-7gp65:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-7gp65\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned kubectl-5556/redis-master-lm26w to iruya-worker\n Normal Pulled 4s kubelet, iruya-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, iruya-worker Created container redis-master\n Normal Started 1s kubelet, iruya-worker Started container redis-master\n"
May 11 12:56:00.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-5556'
May 11 12:56:00.256: INFO: stderr: ""
May 11 12:56:00.257: INFO: stdout: "Name: redis-master\nNamespace: kubectl-5556\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: redis-master-lm26w\n"
May 11 12:56:00.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-5556'
May 11 12:56:00.352: INFO: stderr: ""
May 11 12:56:00.352: INFO: stdout: "Name: redis-master\nNamespace: kubectl-5556\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.99.129.212\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.119:6379\nSession Affinity: None\nEvents: \n"
May 11 12:56:00.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
May 11 12:56:00.474: INFO: stderr: ""
May 11 12:56:00.474: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 11 May 2020 12:55:08 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 11 May 2020 12:55:08 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 11 May 2020 12:55:08 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 11 May 2020 12:55:08 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 56d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 56d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 56d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 56d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 56d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 56d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 56d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n"
May 11 12:56:00.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-5556'
May 11 12:56:00.580: INFO: stderr: ""
May 11 12:56:00.580: INFO: stdout: "Name: kubectl-5556\nLabels: e2e-framework=kubectl\n e2e-run=a7b21891-60b6-419d-84f9-2f1a64e0dd5b\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 12:56:00.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5556" for this suite.
May 11 12:56:24.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 12:56:24.722: INFO: namespace kubectl-5556 deletion completed in 24.137036141s

• [SLOW TEST:31.949 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 12:56:24.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-137, will wait for the garbage collector to delete the pods
May 11 12:56:30.946: INFO: Deleting Job.batch foo took: 5.250441ms
May 11 12:56:31.247: INFO: Terminating Job.batch foo pods took: 300.268438ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 12:57:12.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-137" for this suite.
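[Note: editorial annotation, not test output] The Job object is created through the API rather than piped through kubectl, so its manifest never appears in the log either. A hedged sketch of a Job consistent with the steps above ("Ensuring active pods == parallelism", then deletion via the garbage collector); only the name "foo" and namespace "job-137" come from the log, every spec value below is an assumption:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: foo
      namespace: job-137
    spec:
      parallelism: 2        # assumed; the log only asserts active pods == parallelism
      completions: 4        # assumed
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: c                       # hypothetical container name
            image: busybox                # stand-in; the real test image is not logged
            command: ["sleep", "3600"]    # long-running, so pods stay active until deleted

The "will wait for the garbage collector to delete the pods" step matches what the timings show: the Job object itself is removed quickly (5.25ms above), its pods are then reaped asynchronously by the garbage collector (300ms above), and the test separately polls until the Job is confirmed gone.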
May 11 12:57:18.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 12:57:18.552: INFO: namespace job-137 deletion completed in 6.188210801s

• [SLOW TEST:53.829 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 12:57:18.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
May 11 12:57:18.645: INFO: Waiting up to 5m0s for pod "pod-4a30ccfc-f4fd-4a0c-9ede-44d382f22c88" in namespace "emptydir-112" to be "success or failure"
May 11 12:57:18.649: INFO: Pod "pod-4a30ccfc-f4fd-4a0c-9ede-44d382f22c88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.548192ms
May 11 12:57:20.786: INFO: Pod "pod-4a30ccfc-f4fd-4a0c-9ede-44d382f22c88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141694111s
May 11 12:57:22.790: INFO: Pod "pod-4a30ccfc-f4fd-4a0c-9ede-44d382f22c88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.145724331s
STEP: Saw pod success
May 11 12:57:22.790: INFO: Pod "pod-4a30ccfc-f4fd-4a0c-9ede-44d382f22c88" satisfied condition "success or failure"
May 11 12:57:22.793: INFO: Trying to get logs from node iruya-worker pod pod-4a30ccfc-f4fd-4a0c-9ede-44d382f22c88 container test-container:
STEP: delete the pod
May 11 12:57:22.813: INFO: Waiting for pod pod-4a30ccfc-f4fd-4a0c-9ede-44d382f22c88 to disappear
May 11 12:57:22.818: INFO: Pod pod-4a30ccfc-f4fd-4a0c-9ede-44d382f22c88 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 12:57:22.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-112" for this suite.
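[Note: editorial annotation, not test output] The "(non-root,0644,default)" in the spec name above encodes the test matrix: run the container as a non-root UID, create a file with 0644 permissions, on the default emptyDir medium (node disk rather than Memory). The conformance test uses its own mounttest image and relies on the pod exiting 0, which is the "success or failure" condition polled above. An illustrative stand-in pod with the same shape; everything except the emptyDir volume, the container name, and the restart/phase behavior is an assumption:

    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0644-demo      # hypothetical name
    spec:
      restartPolicy: Never          # pod must terminate so its phase can reach Succeeded
      securityContext:
        runAsUser: 1001             # the "non-root" leg of the matrix; the UID is an assumption
      containers:
      - name: test-container
        image: busybox              # stand-in; the actual mounttest image and flags are not logged
        command: ["/bin/sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -ln /test-volume/f"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}                # no medium set = "default": backed by node storage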
May 11 12:57:28.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 12:57:28.903: INFO: namespace emptydir-112 deletion completed in 6.082546322s

• [SLOW TEST:10.352 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 12:57:28.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9444
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-9444
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9444
May 11 12:57:29.040: INFO: Found 0 stateful pods, waiting for 1
May 11 12:57:39.042: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
May 11 12:57:39.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 11 12:57:39.302: INFO: stderr: "I0511 12:57:39.163100 181 log.go:172] (0xc000116e70) (0xc0006928c0) Create stream\nI0511 12:57:39.163150 181 log.go:172] (0xc000116e70) (0xc0006928c0) Stream added, broadcasting: 1\nI0511 12:57:39.165622 181 log.go:172] (0xc000116e70) Reply frame received for 1\nI0511 12:57:39.165657 181 log.go:172] (0xc000116e70) (0xc000976000) Create stream\nI0511 12:57:39.165673 181 log.go:172] (0xc000116e70) (0xc000976000) Stream added, broadcasting: 3\nI0511 12:57:39.166417 181 log.go:172] (0xc000116e70) Reply frame received for 3\nI0511 12:57:39.166441 181 log.go:172] (0xc000116e70) (0xc0009ca000) Create stream\nI0511 12:57:39.166454 181 log.go:172] (0xc000116e70) (0xc0009ca000) Stream added, broadcasting: 5\nI0511 12:57:39.167051 181 log.go:172] (0xc000116e70) Reply frame received for 5\nI0511 12:57:39.234574 181 log.go:172] (0xc000116e70) Data frame received for 5\nI0511 12:57:39.234591 181 log.go:172] (0xc0009ca000) (5) Data frame handling\nI0511 12:57:39.234608 181 log.go:172] (0xc0009ca000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0511 12:57:39.295806 181 log.go:172] (0xc000116e70) Data frame received for 5\nI0511 12:57:39.295847 181 log.go:172] (0xc0009ca000) (5) Data frame handling\nI0511 12:57:39.295893 181 log.go:172] (0xc000116e70) Data frame received for 3\nI0511 12:57:39.295935 181 log.go:172] (0xc000976000) (3) Data frame handling\nI0511 12:57:39.295991 181 log.go:172] (0xc000976000) (3) Data frame sent\nI0511 12:57:39.296013 181 log.go:172] (0xc000116e70) Data frame received for 3\nI0511 12:57:39.296034 181 log.go:172] (0xc000976000) (3) Data frame handling\nI0511 12:57:39.297262 181 log.go:172] (0xc000116e70) Data frame received for 1\nI0511 12:57:39.297296 181 log.go:172] (0xc0006928c0) (1) Data frame handling\nI0511 12:57:39.297316 181 log.go:172] (0xc0006928c0) (1) Data frame sent\nI0511 12:57:39.297335 181 log.go:172] (0xc000116e70) (0xc0006928c0) Stream removed, broadcasting: 1\nI0511 12:57:39.297489 181 log.go:172] (0xc000116e70) Go away received\nI0511 12:57:39.297812 181 log.go:172] (0xc000116e70) (0xc0006928c0) Stream removed, broadcasting: 1\nI0511 12:57:39.297832 181 log.go:172] (0xc000116e70) (0xc000976000) Stream removed, broadcasting: 3\nI0511 12:57:39.297843 181 log.go:172] (0xc000116e70) (0xc0009ca000) Stream removed, broadcasting: 5\n"
May 11 12:57:39.302: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 11 12:57:39.302: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 11 12:57:39.343: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May 11 12:57:49.397: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 11 12:57:49.397: INFO: Waiting for statefulset status.replicas updated to 0
May 11 12:57:49.463: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999505s
May 11 12:57:50.466: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.941482874s
May 11 12:57:51.469: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.938386289s
May 11 12:57:52.473: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.935551571s
May 11 12:57:53.482: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.931688672s
May 11 12:57:54.487: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.922864895s
May 11 12:57:55.492: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.917873859s
May 11 12:57:56.497: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.912562796s
May 11 12:57:57.501: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.907362335s
May 11 12:57:58.505: INFO: Verifying statefulset ss doesn't scale past 1 for another 903.450743ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9444
May 11 12:57:59.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 12:57:59.765: INFO: stderr: "I0511 12:57:59.653530 201 log.go:172] (0xc00099e370) (0xc0002d0960) Create stream\nI0511 12:57:59.653596 201 log.go:172] (0xc00099e370) (0xc0002d0960) Stream added, broadcasting: 1\nI0511 12:57:59.656081 201 log.go:172] (0xc00099e370) Reply frame received for 1\nI0511 12:57:59.656120 201 log.go:172] (0xc00099e370) (0xc0002d0a00) Create stream\nI0511 12:57:59.656132 201 log.go:172] (0xc00099e370) (0xc0002d0a00) Stream added, broadcasting: 3\nI0511 12:57:59.657064 201 log.go:172] (0xc00099e370) Reply frame received for 3\nI0511 12:57:59.657106 201 log.go:172] (0xc00099e370) (0xc0002d0aa0) Create stream\nI0511 12:57:59.657307 201 log.go:172] (0xc00099e370) (0xc0002d0aa0) Stream added, broadcasting: 5\nI0511 12:57:59.658188 201 log.go:172] (0xc00099e370) Reply frame received for 5\nI0511 12:57:59.759792 201 log.go:172] (0xc00099e370) Data frame received for 3\nI0511 12:57:59.759817 201 log.go:172] (0xc0002d0a00) (3) Data frame handling\nI0511 12:57:59.759825 201 log.go:172] (0xc0002d0a00) (3) Data frame sent\nI0511 12:57:59.759830 201 log.go:172] (0xc00099e370) Data frame received for 3\nI0511 12:57:59.759834 201 log.go:172] (0xc0002d0a00) (3) Data frame handling\nI0511 12:57:59.759863 201 log.go:172] (0xc00099e370) Data frame received for 5\nI0511 12:57:59.759893 201 log.go:172] (0xc0002d0aa0) (5) Data frame handling\nI0511 12:57:59.759922 201 log.go:172] (0xc0002d0aa0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0511 12:57:59.760021 201 log.go:172] (0xc00099e370) Data frame received for 5\nI0511 12:57:59.760043 201 log.go:172] (0xc0002d0aa0) (5) Data frame handling\nI0511 12:57:59.761344 201 log.go:172] (0xc00099e370) Data frame received for 1\nI0511 12:57:59.761355 201 log.go:172] (0xc0002d0960) (1) Data frame handling\nI0511 12:57:59.761361 201 log.go:172] (0xc0002d0960) (1) Data frame sent\nI0511 12:57:59.761530 201 log.go:172] (0xc00099e370) (0xc0002d0960) Stream removed, broadcasting: 1\nI0511 12:57:59.761834 201 log.go:172] (0xc00099e370) (0xc0002d0960) Stream removed, broadcasting: 1\nI0511 12:57:59.761848 201 log.go:172] (0xc00099e370) (0xc0002d0a00) Stream removed, broadcasting: 3\nI0511 12:57:59.761855 201 log.go:172] (0xc00099e370) (0xc0002d0aa0) Stream removed, broadcasting: 5\n"
May 11 12:57:59.765: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 11 12:57:59.765: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 11 12:57:59.769: INFO: Found 1 stateful pods, waiting for 3
May 11 12:58:09.773: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
May 11 12:58:09.773: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
May 11 12:58:09.773: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
May 11 12:58:09.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 11 12:58:10.001: INFO: stderr: "I0511 12:58:09.903405 220 log.go:172] (0xc0009e8370) (0xc000938640) Create stream\nI0511 12:58:09.903450 220 log.go:172] (0xc0009e8370) (0xc000938640) Stream added, broadcasting: 1\nI0511 12:58:09.905561 220 log.go:172] (0xc0009e8370) Reply frame received for 1\nI0511 12:58:09.905612 220 log.go:172] (0xc0009e8370) (0xc0007ba000) Create stream\nI0511 12:58:09.905641 220 log.go:172] (0xc0009e8370) (0xc0007ba000) Stream added, broadcasting: 3\nI0511 12:58:09.906427 220 log.go:172] (0xc0009e8370) Reply frame received for 3\nI0511 12:58:09.906466 220 log.go:172] (0xc0009e8370) (0xc00064e460) Create stream\nI0511 12:58:09.906490 220 log.go:172] (0xc0009e8370) (0xc00064e460) Stream added, broadcasting: 5\nI0511 12:58:09.907611 220 log.go:172] (0xc0009e8370) Reply frame received for 5\nI0511 12:58:09.995541 220 log.go:172] (0xc0009e8370) Data frame received for 5\nI0511 12:58:09.995559 220 log.go:172] (0xc00064e460) (5) Data frame handling\nI0511 12:58:09.995566 220 log.go:172] (0xc00064e460) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0511 12:58:09.995579 220 log.go:172] (0xc0009e8370) Data frame received for 3\nI0511 12:58:09.995584 220 log.go:172] (0xc0007ba000) (3) Data frame handling\nI0511 12:58:09.995590 220 log.go:172] (0xc0007ba000) (3) Data frame sent\nI0511 12:58:09.995594 220 log.go:172] (0xc0009e8370) Data frame received for 3\nI0511 12:58:09.995599 220 log.go:172] (0xc0007ba000) (3) Data frame handling\nI0511 12:58:09.995701 220 log.go:172] (0xc0009e8370) Data frame received for 5\nI0511 12:58:09.995731 220 log.go:172] (0xc00064e460) (5) Data frame handling\nI0511 12:58:09.997331 220 log.go:172] (0xc0009e8370) Data frame received for 1\nI0511 12:58:09.997360 220 log.go:172] (0xc000938640) (1) Data frame handling\nI0511 12:58:09.997372 220 log.go:172] (0xc000938640) (1) Data frame sent\nI0511 12:58:09.997393 220 log.go:172] (0xc0009e8370) (0xc000938640) Stream removed, broadcasting: 1\nI0511 12:58:09.997442 220 log.go:172] (0xc0009e8370) Go away received\nI0511 12:58:09.997791 220 log.go:172] (0xc0009e8370) (0xc000938640) Stream removed, broadcasting: 1\nI0511 12:58:09.997815 220 log.go:172] (0xc0009e8370) (0xc0007ba000) Stream removed, broadcasting: 3\nI0511 12:58:09.997827 220 log.go:172] (0xc0009e8370) (0xc00064e460) Stream removed, broadcasting: 5\n"
May 11 12:58:10.001: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 11 12:58:10.001: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 11 12:58:10.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 11 12:58:10.227: INFO: stderr: "I0511 12:58:10.120429 241 log.go:172] (0xc000a1e420) (0xc0008b6640) Create stream\nI0511 12:58:10.120468 241 log.go:172] (0xc000a1e420) (0xc0008b6640) Stream added, broadcasting: 1\nI0511 12:58:10.122300 241 log.go:172] (0xc000a1e420) Reply frame received for 1\nI0511 12:58:10.122351 241 log.go:172] (0xc000a1e420) (0xc0008d8000) Create stream\nI0511 12:58:10.122379 241 log.go:172] (0xc000a1e420) (0xc0008d8000) Stream added, broadcasting: 3\nI0511 12:58:10.123217 241 log.go:172] (0xc000a1e420) Reply frame received for 3\nI0511 12:58:10.123250 241 log.go:172] (0xc000a1e420) (0xc0008b66e0) Create stream\nI0511 12:58:10.123262 241 log.go:172] (0xc000a1e420) (0xc0008b66e0) Stream added, broadcasting: 5\nI0511 12:58:10.124033 241 log.go:172] (0xc000a1e420) Reply frame received for 5\nI0511 12:58:10.190257 241 log.go:172] (0xc000a1e420) Data frame received for 5\nI0511 12:58:10.190292 241 log.go:172] (0xc0008b66e0) (5) Data frame handling\nI0511 12:58:10.190319 241 log.go:172] (0xc0008b66e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0511 12:58:10.220686 241 log.go:172] (0xc000a1e420) Data frame received for 5\nI0511 12:58:10.220724 241 log.go:172] (0xc0008b66e0) (5) Data frame handling\nI0511 12:58:10.220752 241 log.go:172] (0xc000a1e420) Data frame received for 3\nI0511 12:58:10.220835 241 log.go:172] (0xc0008d8000) (3) Data frame handling\nI0511 12:58:10.220884 241 log.go:172] (0xc0008d8000) (3) Data frame sent\nI0511 12:58:10.220905 241 log.go:172] (0xc000a1e420) Data frame received for 3\nI0511 12:58:10.221035 241 log.go:172] (0xc0008d8000) (3) Data frame handling\nI0511 12:58:10.222573 241 log.go:172] (0xc000a1e420) Data frame received for 1\nI0511 12:58:10.222587 241 log.go:172] (0xc0008b6640) (1) Data frame handling\nI0511 12:58:10.222599 241 log.go:172] (0xc0008b6640) (1) Data frame sent\nI0511 12:58:10.222650 241 log.go:172] (0xc000a1e420) (0xc0008b6640) Stream removed, broadcasting: 1\nI0511 12:58:10.222691 241 log.go:172] (0xc000a1e420) Go away received\nI0511 12:58:10.222968 241 log.go:172] (0xc000a1e420) (0xc0008b6640) Stream removed, broadcasting: 1\nI0511 12:58:10.222986 241 log.go:172] (0xc000a1e420) (0xc0008d8000) Stream removed, broadcasting: 3\nI0511 12:58:10.222998 241 log.go:172] (0xc000a1e420) (0xc0008b66e0) Stream removed, broadcasting: 5\n"
May 11 12:58:10.227: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 11 12:58:10.227: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 11 12:58:10.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 11 12:58:10.435: INFO: stderr: "I0511 12:58:10.349963 260 log.go:172] (0xc0008e2420) (0xc00052f540) Create stream\nI0511 12:58:10.350003 260 log.go:172] (0xc0008e2420) (0xc00052f540) Stream added, broadcasting: 1\nI0511 12:58:10.351604 260 log.go:172] (0xc0008e2420) Reply frame received for 1\nI0511 12:58:10.351652 260 log.go:172] (0xc0008e2420) (0xc00003a3c0) Create stream\nI0511 12:58:10.351671 260 log.go:172] (0xc0008e2420) (0xc00003a3c0) Stream added, broadcasting: 3\nI0511 12:58:10.352515 260 log.go:172] (0xc0008e2420) Reply frame received for 3\nI0511 12:58:10.352554 260 log.go:172] (0xc0008e2420) (0xc00003a460) Create stream\nI0511 12:58:10.352573 260 log.go:172] (0xc0008e2420) (0xc00003a460) Stream added, broadcasting: 5\nI0511 12:58:10.353748 260 log.go:172] (0xc0008e2420) Reply frame received for 5\nI0511 12:58:10.407588 260 log.go:172] (0xc0008e2420) Data frame received for 5\nI0511 12:58:10.407613 260 log.go:172] (0xc00003a460) (5) Data frame handling\nI0511 12:58:10.407648 260 log.go:172] (0xc00003a460) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0511 12:58:10.430500 260 log.go:172] (0xc0008e2420) Data frame received for 5\nI0511 12:58:10.430538 260 log.go:172] (0xc00003a460) (5) Data frame handling\nI0511 12:58:10.430564 260 log.go:172] (0xc0008e2420) Data frame received for 3\nI0511 12:58:10.430580 260 log.go:172] (0xc00003a3c0) (3) Data frame handling\nI0511 12:58:10.430595 260 log.go:172] (0xc00003a3c0) (3) Data frame sent\nI0511 12:58:10.430614 260 log.go:172] (0xc0008e2420) Data frame received for 3\nI0511 12:58:10.430623 260 log.go:172] (0xc00003a3c0) (3) Data frame handling\nI0511 12:58:10.431895 260 log.go:172] (0xc0008e2420) Data frame received for 1\nI0511 12:58:10.431913 260 log.go:172] (0xc00052f540) (1) Data frame handling\nI0511 12:58:10.431921 260 log.go:172] (0xc00052f540) (1) Data frame sent\nI0511 12:58:10.431947 260 log.go:172] (0xc0008e2420) (0xc00052f540) Stream removed, broadcasting: 1\nI0511 12:58:10.431966 260 log.go:172] (0xc0008e2420) Go away received\nI0511 12:58:10.432189 260 log.go:172] (0xc0008e2420) (0xc00052f540) Stream removed, broadcasting: 1\nI0511 12:58:10.432213 260 log.go:172] (0xc0008e2420) (0xc00003a3c0) Stream removed, broadcasting: 3\nI0511 12:58:10.432221 260 log.go:172] (0xc0008e2420) (0xc00003a460) Stream removed, broadcasting: 5\n"
May 11 12:58:10.436: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 11 12:58:10.436: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 11 12:58:10.436: INFO: Waiting for statefulset status.replicas updated to 0
May 11 12:58:10.439: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
May 11 12:58:20.447: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 11 12:58:20.447: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
May 11 12:58:20.447: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
May 11 12:58:20.486: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999976s
May 11 12:58:21.490: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.967726267s
May 11 12:58:22.494: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.963469153s
May 11 12:58:23.499: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.959773554s
May 11 12:58:24.560: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.954130447s
May 11 12:58:25.565: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.893628485s
May 11 12:58:26.569: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.888640859s
May 11 12:58:27.573: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.884199454s
May 11 12:58:28.577: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.880250964s
May 11 12:58:29.582: INFO: Verifying statefulset ss doesn't scale past 3 for another 876.276954ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-9444
May 11 12:58:30.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 12:58:30.795: INFO: stderr: "I0511 12:58:30.711148 280 log.go:172] (0xc000918420) (0xc000910820) Create stream\nI0511 12:58:30.711200 280 log.go:172] (0xc000918420) (0xc000910820) Stream added, broadcasting: 1\nI0511 12:58:30.714950 280 log.go:172] (0xc000918420) Reply frame received for 1\nI0511 12:58:30.714993 280 log.go:172] (0xc000918420) (0xc0009dc000) Create stream\nI0511 12:58:30.715005 280 log.go:172] (0xc000918420) (0xc0009dc000) Stream added, broadcasting: 3\nI0511 12:58:30.715652 280 log.go:172] (0xc000918420) Reply frame received for 3\nI0511 12:58:30.715695 280 log.go:172] (0xc000918420) (0xc000910000) Create stream\nI0511 12:58:30.715705 280 log.go:172] (0xc000918420) (0xc000910000) Stream added, broadcasting: 5\nI0511 12:58:30.716525 280 log.go:172] (0xc000918420) Reply frame received for 5\nI0511 12:58:30.791220 280 log.go:172] (0xc000918420) Data frame received for 3\nI0511 12:58:30.791250 280 log.go:172] (0xc0009dc000) (3) Data frame handling\nI0511 12:58:30.791258 280 log.go:172] (0xc0009dc000) (3) Data frame sent\nI0511 12:58:30.791264 280 log.go:172] (0xc000918420) Data frame received for 3\nI0511 12:58:30.791278 280 log.go:172] (0xc0009dc000) (3) Data frame handling\nI0511 12:58:30.791309 280 log.go:172] (0xc000918420) Data frame received for 5\nI0511 12:58:30.791325 280 log.go:172] (0xc000910000) (5) Data frame handling\nI0511 12:58:30.791344 280 log.go:172] (0xc000910000) (5) Data frame sent\nI0511 12:58:30.791357 280 log.go:172] (0xc000918420) Data frame received for 5\nI0511 12:58:30.791365 280 log.go:172] (0xc000910000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0511 12:58:30.792539 280 log.go:172] (0xc000918420) Data frame received for 1\nI0511 12:58:30.792560 280 log.go:172] (0xc000910820) (1) Data frame handling\nI0511 12:58:30.792579 280 log.go:172] (0xc000910820) (1) Data frame sent\nI0511 12:58:30.792596 280 log.go:172] (0xc000918420) (0xc000910820) Stream removed, broadcasting: 1\nI0511 12:58:30.792615 280 log.go:172] (0xc000918420) Go away received\nI0511 12:58:30.792912 280 log.go:172] (0xc000918420) (0xc000910820) Stream removed, broadcasting: 1\nI0511 12:58:30.792940 280 log.go:172] (0xc000918420) (0xc0009dc000) Stream removed, broadcasting: 3\nI0511 12:58:30.792948 280 log.go:172] (0xc000918420) (0xc000910000) Stream removed, broadcasting: 5\n"
May 11 12:58:30.795: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 11 12:58:30.796: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 11 12:58:30.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 12:58:30.991: INFO: stderr: "I0511 12:58:30.921403 300 log.go:172] (0xc00096e2c0) (0xc0008d06e0) Create stream\nI0511 12:58:30.921459 300 log.go:172] (0xc00096e2c0) (0xc0008d06e0) Stream added, broadcasting: 1\nI0511 12:58:30.924151 300 log.go:172] (0xc00096e2c0) Reply frame received for 1\nI0511 12:58:30.924214 300 log.go:172] (0xc00096e2c0) (0xc0009b8000) Create stream\nI0511 12:58:30.924241 300 log.go:172] (0xc00096e2c0) (0xc0009b8000) Stream added, broadcasting: 3\nI0511 12:58:30.925461 300 log.go:172] (0xc00096e2c0) Reply frame received for 3\nI0511 12:58:30.925493 300 log.go:172] (0xc00096e2c0) (0xc0008d0780) Create stream\nI0511 12:58:30.925503 300 log.go:172] (0xc00096e2c0) (0xc0008d0780) Stream added, broadcasting: 5\nI0511 12:58:30.926307 300 log.go:172] (0xc00096e2c0) Reply frame received for 5\nI0511 12:58:30.985793 300 log.go:172] (0xc00096e2c0) Data frame received for 5\nI0511 12:58:30.985835 300 log.go:172] (0xc0008d0780) (5) Data frame handling\nI0511 12:58:30.985845 300 log.go:172] (0xc0008d0780) (5) Data frame sent\nI0511 12:58:30.985853 300 log.go:172] (0xc00096e2c0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0511 12:58:30.985881 300 log.go:172] (0xc00096e2c0) Data frame received for 3\nI0511 12:58:30.985924 300 log.go:172] (0xc0009b8000) (3) Data frame handling\nI0511 12:58:30.985946 300 log.go:172] (0xc0009b8000) (3) Data frame sent\nI0511 12:58:30.985964 300 log.go:172] (0xc00096e2c0) Data frame received for 3\nI0511 12:58:30.985979 300 log.go:172] (0xc0009b8000) (3) Data frame handling\nI0511 12:58:30.986001 300 log.go:172] (0xc0008d0780) (5) Data frame handling\nI0511 12:58:30.987574 300 log.go:172] (0xc00096e2c0) Data frame received for 1\nI0511 12:58:30.987588 300 log.go:172] (0xc0008d06e0) (1) Data frame handling\nI0511 12:58:30.987602 300 log.go:172] (0xc0008d06e0) (1) Data frame sent\nI0511 12:58:30.987612 300 log.go:172] (0xc00096e2c0) (0xc0008d06e0) Stream removed, broadcasting: 1\nI0511 12:58:30.987750 300 log.go:172] (0xc00096e2c0) Go away received\nI0511 12:58:30.987938 300 log.go:172] (0xc00096e2c0) (0xc0008d06e0) Stream removed, broadcasting: 1\nI0511 12:58:30.987955 300 log.go:172] (0xc00096e2c0) (0xc0009b8000) Stream removed, broadcasting: 3\nI0511 12:58:30.987963 300 log.go:172] (0xc00096e2c0) (0xc0008d0780) Stream removed, broadcasting: 5\n"
May 11 12:58:30.991: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 11 12:58:30.991: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 11 12:58:30.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 12:58:31.207: INFO: rc: 1
May 11 12:58:31.207: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] I0511 12:58:31.140906 320 log.go:172] (0xc0008220b0) (0xc000764640) Create stream I0511 12:58:31.140953 320 log.go:172] (0xc0008220b0) (0xc000764640) Stream added, broadcasting: 1 I0511 12:58:31.143172 320 log.go:172] (0xc0008220b0) Reply frame received for 1 I0511 12:58:31.143199 320 log.go:172] (0xc0008220b0) (0xc0007ca140) Create stream I0511 12:58:31.143211 320 log.go:172] (0xc0008220b0) (0xc0007ca140) Stream added, broadcasting: 3 I0511 12:58:31.143948 320 log.go:172] (0xc0008220b0) Reply frame received for 3 I0511 12:58:31.143986 320 log.go:172] (0xc0008220b0) (0xc000764780) Create stream I0511 12:58:31.144002 320 log.go:172] (0xc0008220b0) (0xc000764780) Stream added, broadcasting: 5 I0511 12:58:31.144676 320 log.go:172] (0xc0008220b0) Reply frame received for 5 I0511 12:58:31.203354 320 log.go:172] (0xc0008220b0) (0xc0007ca140) Stream removed, broadcasting: 3 I0511 12:58:31.203398 320 log.go:172] (0xc0008220b0) Data frame received for 1 I0511 12:58:31.203416 320 log.go:172] (0xc000764640) (1) Data frame handling I0511 12:58:31.203430 320 log.go:172] (0xc0008220b0) (0xc000764780) Stream removed, broadcasting: 5 I0511 12:58:31.203475 320 log.go:172] (0xc000764640) (1) Data frame sent I0511 12:58:31.203496 320 log.go:172] (0xc0008220b0) (0xc000764640) Stream removed, broadcasting: 1 I0511 12:58:31.203508 320 log.go:172] (0xc0008220b0) Go away received I0511 12:58:31.203755 320 log.go:172] (0xc0008220b0) (0xc000764640) Stream removed, broadcasting: 1 I0511 12:58:31.203767 320 log.go:172] (0xc0008220b0) (0xc0007ca140) Stream removed, broadcasting: 3 I0511 12:58:31.203773 320 log.go:172] (0xc0008220b0) (0xc000764780) Stream removed, broadcasting: 5 error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "41a0ce696c5d7bade77d1c14d13c97434d836517fc635e13e6d6a786de664f3c": OCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused "process_linux.go:101: executing setns process caused \"exit status 1\"": unknown [] 0xc001c509c0 exit status 1 true [0xc0009d0ea8 0xc0009d0f60 0xc0009d1020] [0xc0009d0ea8 0xc0009d0f60 0xc0009d1020] [0xc0009d0f28 0xc0009d1010] [0xba70e0 0xba70e0] 0xc001e9a420 }:
Command stdout:

stderr:
I0511 12:58:31.140906 320 log.go:172] (0xc0008220b0) (0xc000764640) Create stream
I0511 12:58:31.140953 320 log.go:172] (0xc0008220b0) (0xc000764640) Stream added, broadcasting: 1
I0511 12:58:31.143172 320 log.go:172] (0xc0008220b0) Reply frame received for 1
I0511 12:58:31.143199 320 log.go:172] (0xc0008220b0) (0xc0007ca140) Create stream
I0511 12:58:31.143211 320 log.go:172] (0xc0008220b0) (0xc0007ca140) Stream added, broadcasting: 3
I0511 12:58:31.143948 320 log.go:172] (0xc0008220b0) Reply frame received for 3
I0511 12:58:31.143986 320 log.go:172] (0xc0008220b0) (0xc000764780) Create stream
I0511 12:58:31.144002 320 log.go:172] (0xc0008220b0) (0xc000764780) Stream added, broadcasting: 5
I0511 12:58:31.144676 320 log.go:172] (0xc0008220b0) Reply frame received for 5
I0511 12:58:31.203354 320 log.go:172] (0xc0008220b0) (0xc0007ca140) Stream removed, broadcasting: 3
I0511 12:58:31.203398 320 log.go:172] (0xc0008220b0) Data frame received for 1
I0511 12:58:31.203416 320 log.go:172] (0xc000764640) (1) Data frame handling
I0511 12:58:31.203430 320 log.go:172] (0xc0008220b0) (0xc000764780) Stream removed, broadcasting: 5
I0511 12:58:31.203475 320 log.go:172] (0xc000764640) (1) Data frame sent
I0511 12:58:31.203496 320 log.go:172] (0xc0008220b0) (0xc000764640) Stream removed, broadcasting: 1
I0511 12:58:31.203508 320 log.go:172] (0xc0008220b0) Go away received
I0511 12:58:31.203755 320 log.go:172] (0xc0008220b0) (0xc000764640) Stream removed, broadcasting: 1
I0511 12:58:31.203767 320 log.go:172] (0xc0008220b0) (0xc0007ca140) Stream removed, broadcasting: 3
I0511 12:58:31.203773 320 log.go:172] (0xc0008220b0) (0xc000764780) Stream removed, broadcasting: 5
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "41a0ce696c5d7bade77d1c14d13c97434d836517fc635e13e6d6a786de664f3c": OCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused "process_linux.go:101: executing setns process caused \"exit status 1\"": unknown

error:
exit status 1

May 11 12:58:41.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 12:58:41.323: INFO: rc: 1
May 11 12:58:41.323: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc00253cc00 exit status 1 true [0xc002cea4f8 0xc002cea510 0xc002cea528] [0xc002cea4f8 0xc002cea510 0xc002cea528] [0xc002cea508 0xc002cea520] [0xba70e0 0xba70e0] 0xc001d5d500 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

May 11 12:58:51.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 12:58:51.415: INFO: rc: 1
May 11 12:58:51.416: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0020c60f0 exit status 1 true [0xc00053f318 0xc00053f368 0xc00053f468] [0xc00053f318 0xc00053f368 0xc00053f468] [0xc00053f350 0xc00053f3c0] [0xba70e0 0xba70e0] 0xc002cf2840 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 11 12:59:01.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 12:59:01.517: INFO: rc: 1
May 11 12:59:01.517: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001c50a80 exit status 1 true [0xc0009d1030 0xc0009d10c8 0xc0009d1128] [0xc0009d1030 0xc0009d10c8 0xc0009d1128] [0xc0009d1090 0xc0009d1108] [0xba70e0 0xba70e0] 0xc001e9b560 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 11 12:59:11.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 12:59:11.630: INFO: rc: 1
May 11 12:59:11.630: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0020c61e0 exit status 1 true [0xc00053f4c8 0xc00053f508 0xc00053f5e0] [0xc00053f4c8 0xc00053f508 0xc00053f5e0] [0xc00053f4f8 0xc00053f5c8] [0xba70e0 0xba70e0] 0xc002cf2c00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 11 12:59:21.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 12:59:21.726: INFO: rc: 1
May 11 12:59:21.726: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001950d20 exit status 1 true [0xc002070490 0xc0020704d8 0xc002070550] [0xc002070490 0xc0020704d8 0xc002070550] [0xc0020704b8 0xc002070520] [0xba70e0 0xba70e0] 0xc002e192c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 11 12:59:31.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 12:59:31.821: INFO: rc: 1
May 11 12:59:31.821: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc00253ccf0 exit status 1 true [0xc002cea530 0xc002cea548 0xc002cea560] [0xc002cea530 0xc002cea548 0xc002cea560] [0xc002cea540 0xc002cea558] [0xba70e0 0xba70e0] 0xc001d5db00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 11 12:59:41.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 12:59:41.932: INFO: rc: 1
May 11 12:59:41.932: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0020c62d0 exit status 1 true [0xc00053f678 0xc00053f718 0xc00053f770] [0xc00053f678 0xc00053f718 0xc00053f770] [0xc00053f710 0xc00053f730] [0xba70e0 0xba70e0] 0xc002cf2f00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 11 12:59:51.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 12:59:52.036: INFO: rc: 1
May 11 12:59:52.036: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001c50bd0 exit status 1 true [0xc0009d1158 0xc0009d11e8 0xc0009d1248] [0xc0009d1158 0xc0009d11e8 0xc0009d1248] [0xc0009d11d8 0xc0009d1220] [0xba70e0 0xba70e0] 0xc001e9baa0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 11 13:00:02.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 13:00:02.135: INFO: rc: 1
May 11 13:00:02.135: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0014ea090 exit status 1 true [0xc002cea010 0xc002cea028 0xc002cea068] [0xc002cea010 0xc002cea028 0xc002cea068] [0xc002cea020 0xc002cea048] [0xba70e0 0xba70e0] 0xc002d66240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 11 13:00:12.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 13:00:12.234: INFO: rc: 1
May 11 13:00:12.234: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001253950 exit status 1 true [0xc00053e3b8 0xc00053eea8 0xc00053efc0] [0xc00053e3b8 0xc00053eea8 0xc00053efc0] [0xc00053ed88 0xc00053efa8] [0xba70e0 0xba70e0] 0xc002aac4e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 11 13:00:22.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 13:00:22.340: INFO: rc: 1
May 11 13:00:22.340: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001253a10 exit status 1 true [0xc00053f020 0xc00053f0f0 0xc00053f330] [0xc00053f020 0xc00053f0f0 0xc00053f330] [0xc00053f068 0xc00053f318] [0xba70e0 0xba70e0] 0xc002aac840 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 11 13:00:32.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 13:00:32.436: INFO: rc: 1
May 11 13:00:32.436: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002dbe090 exit status 1 true [0xc002070000 0xc002070048 0xc002070090] [0xc002070000 0xc002070048 0xc002070090] [0xc002070040 0xc002070078] [0xba70e0 0xba70e0] 0xc001d5d020 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 11 13:00:42.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 13:00:42.530: INFO: rc: 1
May 11 13:00:42.530: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002dbe180 exit status 1 true [0xc0020700a0 0xc002070100 0xc002070140] [0xc0020700a0 0xc002070100 0xc002070140] [0xc0020700c8 0xc002070120] [0xba70e0 0xba70e0] 0xc001d5d800 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 11 13:00:52.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 13:00:52.625: INFO: rc: 1
May 11 13:00:52.625: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0014ea150 exit status 1 true [0xc002cea078 0xc002cea0b0 0xc002cea0f8] [0xc002cea078 0xc002cea0b0 0xc002cea0f8] [0xc002cea098 0xc002cea0e0] [0xba70e0 0xba70e0] 0xc002d66540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 11 13:01:02.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 13:01:02.716: INFO: rc: 1
May 11 13:01:02.716: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001253ad0 exit status 1 true [0xc00053f350 0xc00053f3c0 0xc00053f4d8] [0xc00053f350 0xc00053f3c0 0xc00053f4d8] [0xc00053f388 0xc00053f4c8] [0xba70e0 0xba70e0] 0xc002aacc00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 11 13:01:12.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 13:01:12.831: INFO: rc: 1
May 11 13:01:12.831: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000af42a0 exit status 1 true [0xc0009d0070 0xc0009d0240 0xc0009d0330] [0xc0009d0070 0xc0009d0240 0xc0009d0330] [0xc0009d00a8 0xc0009d02e0] [0xba70e0 0xba70e0] 0xc002cf2300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 11 13:01:22.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 13:01:22.928: INFO: rc: 1
May 11 13:01:22.928: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000af43c0 exit status 1 true [0xc0009d0358 0xc0009d0578 0xc0009d07b0] [0xc0009d0358 0xc0009d0578 0xc0009d07b0] [0xc0009d0400 0xc0009d0780] [0xba70e0 0xba70e0] 0xc002cf2660 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 11 13:01:32.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 13:01:33.029: INFO: rc: 1
May 11 13:01:33.029: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001253bc0 exit status 1 true [0xc00053f4f8 0xc00053f5c8 0xc00053f6e0] [0xc00053f4f8 0xc00053f5c8 0xc00053f6e0] [0xc00053f578 0xc00053f678] [0xba70e0 0xba70e0] 0xc002aacfc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 11 13:01:43.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 13:01:43.114: INFO: rc: 1
May 11 13:01:43.114: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000af44e0 exit status 1 true [0xc0009d0858 0xc0009d08e0 0xc0009d0a30] [0xc0009d0858 0xc0009d08e0 0xc0009d0a30] [0xc0009d08c0 0xc0009d09d0] [0xba70e0 0xba70e0] 0xc002cf29c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1

May 11 13:01:53.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 11 13:01:53.223: INFO: rc: 1 May 11 13:01:53.223: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0014ea270 exit status 1 true [0xc002cea108 0xc002cea130 0xc002cea168] [0xc002cea108 0xc002cea130 0xc002cea168] [0xc002cea128 0xc002cea148] [0xba70e0 0xba70e0] 0xc002d66900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 11 13:02:03.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 13:02:03.329: INFO: rc: 1 May 11 13:02:03.329: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001253980 exit status 1 true [0xc00053ed40 0xc00053ef88 0xc00053f020] [0xc00053ed40 0xc00053ef88 0xc00053f020] [0xc00053eea8 0xc00053efc0] [0xba70e0 0xba70e0] 0xc002aac4e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 11 13:02:13.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 13:02:13.431: INFO: rc: 1 May 11 13:02:13.432: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc000af4090 exit status 1 true [0xc0009d0070 0xc0009d0240 0xc0009d0330] [0xc0009d0070 0xc0009d0240 0xc0009d0330] [0xc0009d00a8 0xc0009d02e0] [0xba70e0 0xba70e0] 0xc002cf2300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 11 13:02:23.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 13:02:23.522: INFO: rc: 1 May 11 13:02:23.523: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002dbe0f0 exit status 1 true [0xc002070000 0xc002070048 0xc002070090] [0xc002070000 0xc002070048 0xc002070090] [0xc002070040 0xc002070078] [0xba70e0 0xba70e0] 0xc001d5d020 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 11 13:02:33.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 13:02:33.619: INFO: rc: 1 May 11 13:02:33.619: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0014ea0f0 exit status 1 true [0xc002cea008 0xc002cea020 0xc002cea048] [0xc002cea008 0xc002cea020 0xc002cea048] [0xc002cea018 0xc002cea030] [0xba70e0 0xba70e0] 0xc002d66240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 11 13:02:43.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 13:02:43.714: INFO: rc: 1 May 11 13:02:43.714: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0014ea210 exit status 1 true [0xc002cea068 0xc002cea098 0xc002cea0e0] [0xc002cea068 0xc002cea098 0xc002cea0e0] [0xc002cea090 0xc002cea0c0] [0xba70e0 0xba70e0] 0xc002d66540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 11 13:02:53.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 13:02:53.814: INFO: rc: 1 May 11 13:02:53.814: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0014ea300 exit status 1 true [0xc002cea0f8 0xc002cea128 0xc002cea148] [0xc002cea0f8 0xc002cea128 0xc002cea148] [0xc002cea120 0xc002cea138] [0xba70e0 0xba70e0] 0xc002d66900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 11 13:03:03.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 13:03:03.914: INFO: rc: 1 May 11 13:03:03.914: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc002dbe1e0 exit status 1 true [0xc0020700a0 0xc002070100 0xc002070140] [0xc0020700a0 0xc002070100 0xc002070140] [0xc0020700c8 0xc002070120] [0xba70e0 0xba70e0] 0xc001d5d800 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 11 13:03:13.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 13:03:14.007: INFO: rc: 1 May 11 13:03:14.007: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001253b00 exit status 1 true [0xc00053f030 0xc00053f238 0xc00053f350] [0xc00053f030 0xc00053f238 0xc00053f350] [0xc00053f0f0 0xc00053f330] [0xba70e0 0xba70e0] 
0xc002aac840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 11 13:03:24.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 13:03:24.093: INFO: rc: 1 May 11 13:03:24.093: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001253c50 exit status 1 true [0xc00053f368 0xc00053f468 0xc00053f4f8] [0xc00053f368 0xc00053f468 0xc00053f4f8] [0xc00053f3c0 0xc00053f4d8] [0xba70e0 0xba70e0] 0xc002aacc00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 May 11 13:03:34.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9444 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 13:03:34.174: INFO: rc: 1 May 11 13:03:34.174: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: May 11 13:03:34.174: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 11 13:03:34.183: INFO: Deleting all statefulset in ns statefulset-9444 May 11 13:03:34.337: INFO: Scaling statefulset ss to 0 May 11 13:03:34.347: INFO: Waiting for statefulset status.replicas updated to 0 May 11 13:03:34.348: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:03:34.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9444" for this suite. 
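
The retry loop elided above is the framework's RunHostCmd helper: it shells out to kubectl exec and sleeps 10s between attempts until the command succeeds or the wait deadline passes; here every attempt failed because pod ss-2 had already been removed by the scale-down. A minimal, stdlib-only Go sketch of that poll-and-retry pattern (the function name, flags, and timeouts are illustrative, not the framework's actual implementation):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runHostCmd mirrors the log's retry loop: run `kubectl exec` every
// interval until it exits 0 or the deadline passes. Illustrative only.
func runHostCmd(ns, pod, shellCmd string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("kubectl",
			"exec", "--namespace="+ns, pod, "--",
			"/bin/sh", "-c", shellCmd).CombinedOutput()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up after %v: %v: %s", timeout, err, out)
		}
		fmt.Printf("Waiting %v to retry failed RunHostCmd: %v\n", interval, err)
		time.Sleep(interval)
	}
}

func main() {
	// The command from the log: restore index.html inside pod ss-2.
	err := runHostCmd("statefulset-9444", "ss-2",
		"mv -v /tmp/index.html /usr/share/nginx/html/ || true",
		4*time.Minute, 10*time.Second)
	fmt.Println(err) // here: NotFound on every attempt, so a timeout error
}

Note that the `|| true` in the shell command only masks failures inside the pod; once the pod itself is gone, kubectl exits non-zero before the shell ever runs, which is why every retry keeps failing with NotFound.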
May 11 13:03:40.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:03:40.643: INFO: namespace statefulset-9444 deletion completed in 6.231375518s • [SLOW TEST:371.740 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:03:40.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-f8bee344-8d04-46a4-9e59-5e41587eebc4 STEP: Creating a pod to test consume configMaps May 11 13:03:42.034: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9034a591-4bb0-4033-873b-10116b6b560b" in namespace "projected-3825" to be "success or failure" May 11 13:03:42.086: INFO: Pod "pod-projected-configmaps-9034a591-4bb0-4033-873b-10116b6b560b": Phase="Pending", Reason="", readiness=false. Elapsed: 51.464921ms May 11 13:03:44.089: INFO: Pod "pod-projected-configmaps-9034a591-4bb0-4033-873b-10116b6b560b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055305922s May 11 13:03:46.158: INFO: Pod "pod-projected-configmaps-9034a591-4bb0-4033-873b-10116b6b560b": Phase="Running", Reason="", readiness=true. Elapsed: 4.123988403s May 11 13:03:48.162: INFO: Pod "pod-projected-configmaps-9034a591-4bb0-4033-873b-10116b6b560b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.127631448s STEP: Saw pod success May 11 13:03:48.162: INFO: Pod "pod-projected-configmaps-9034a591-4bb0-4033-873b-10116b6b560b" satisfied condition "success or failure" May 11 13:03:48.164: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-9034a591-4bb0-4033-873b-10116b6b560b container projected-configmap-volume-test: STEP: delete the pod May 11 13:03:48.259: INFO: Waiting for pod pod-projected-configmaps-9034a591-4bb0-4033-873b-10116b6b560b to disappear May 11 13:03:48.293: INFO: Pod pod-projected-configmaps-9034a591-4bb0-4033-873b-10116b6b560b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:03:48.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3825" for this suite. 
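
The projected-ConfigMap test above boils down to: create a ConfigMap, mount it into a pod through a projected volume, and wait for the pod to read the key and exit. A hedged sketch of an equivalent manifest, piped to `kubectl create -f -` the same way the suite drives kubectl (resource names and the busybox image are illustrative; the real test uses its own generated names and test images):

package main

import (
	"log"
	"os/exec"
	"strings"
)

// Illustrative manifest: a ConfigMap consumed via a projected volume.
const manifest = `apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-test-volume
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
`

func main() {
	// Pipe the manifest to kubectl, as the e2e suite does with `create -f -`.
	cmd := exec.Command("kubectl", "create", "-f", "-", "--namespace=projected-3825")
	cmd.Stdin = strings.NewReader(manifest)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("create failed: %v: %s", err, out)
	}
}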
May 11 13:03:54.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:03:54.363: INFO: namespace projected-3825 deletion completed in 6.066821339s • [SLOW TEST:13.719 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:03:54.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-00ce6e4b-9d84-46fe-8a84-50be9929a7d6 STEP: Creating a pod to test consume secrets May 11 13:03:54.811: INFO: Waiting up to 5m0s for pod "pod-secrets-ac9c0e32-af6d-46e7-820b-f92937bd0d0a" in namespace "secrets-7249" to be "success or failure" May 11 13:03:54.837: INFO: Pod "pod-secrets-ac9c0e32-af6d-46e7-820b-f92937bd0d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 25.821741ms May 11 13:03:56.841: INFO: Pod "pod-secrets-ac9c0e32-af6d-46e7-820b-f92937bd0d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029643363s May 11 13:03:58.846: INFO: Pod "pod-secrets-ac9c0e32-af6d-46e7-820b-f92937bd0d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034564513s May 11 13:04:00.849: INFO: Pod "pod-secrets-ac9c0e32-af6d-46e7-820b-f92937bd0d0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037537701s STEP: Saw pod success May 11 13:04:00.849: INFO: Pod "pod-secrets-ac9c0e32-af6d-46e7-820b-f92937bd0d0a" satisfied condition "success or failure" May 11 13:04:00.851: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-ac9c0e32-af6d-46e7-820b-f92937bd0d0a container secret-volume-test: STEP: delete the pod May 11 13:04:00.918: INFO: Waiting for pod pod-secrets-ac9c0e32-af6d-46e7-820b-f92937bd0d0a to disappear May 11 13:04:00.958: INFO: Pod pod-secrets-ac9c0e32-af6d-46e7-820b-f92937bd0d0a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:04:00.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7249" for this suite. 
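
The "Waiting up to 5m0s for pod ... to be 'success or failure'" lines above (and in the other volume tests) are a simple phase poll: check pod.status.phase every couple of seconds until it reaches Succeeded or Failed, or the timeout expires. A rough stdlib-only equivalent using kubectl's jsonpath output (illustrative; the suite itself polls through client-go rather than shelling out):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPodSuccessOrFailure polls the pod phase the way the log's
// "Waiting up to 5m0s" lines do. Illustrative sketch only.
func waitForPodSuccessOrFailure(ns, pod string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "pod", pod,
			"--namespace="+ns, "-o", "jsonpath={.status.phase}").Output()
		if err == nil {
			phase := strings.TrimSpace(string(out))
			fmt.Printf("Pod %q: Phase=%q\n", pod, phase)
			if phase == "Succeeded" || phase == "Failed" {
				return phase, nil
			}
		}
		time.Sleep(2 * time.Second) // the log shows ~2s between checks
	}
	return "", fmt.Errorf("pod %s/%s did not finish within %v", ns, pod, timeout)
}

func main() {
	phase, err := waitForPodSuccessOrFailure("secrets-7249",
		"pod-secrets-ac9c0e32-af6d-46e7-820b-f92937bd0d0a", 5*time.Minute)
	fmt.Println(phase, err)
}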
May 11 13:04:06.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:04:07.064: INFO: namespace secrets-7249 deletion completed in 6.101550144s • [SLOW TEST:12.700 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:04:07.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 13:04:07.209: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b1084235-237d-4884-abfc-e91d16ff3df7" in namespace "downward-api-3076" to be "success or failure" May 11 13:04:07.216: INFO: Pod "downwardapi-volume-b1084235-237d-4884-abfc-e91d16ff3df7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.481221ms May 11 13:04:09.276: INFO: Pod "downwardapi-volume-b1084235-237d-4884-abfc-e91d16ff3df7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067221825s May 11 13:04:11.280: INFO: Pod "downwardapi-volume-b1084235-237d-4884-abfc-e91d16ff3df7": Phase="Running", Reason="", readiness=true. Elapsed: 4.070910583s May 11 13:04:13.284: INFO: Pod "downwardapi-volume-b1084235-237d-4884-abfc-e91d16ff3df7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.074658574s STEP: Saw pod success May 11 13:04:13.284: INFO: Pod "downwardapi-volume-b1084235-237d-4884-abfc-e91d16ff3df7" satisfied condition "success or failure" May 11 13:04:13.311: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-b1084235-237d-4884-abfc-e91d16ff3df7 container client-container: STEP: delete the pod May 11 13:04:13.439: INFO: Waiting for pod downwardapi-volume-b1084235-237d-4884-abfc-e91d16ff3df7 to disappear May 11 13:04:13.480: INFO: Pod downwardapi-volume-b1084235-237d-4884-abfc-e91d16ff3df7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:04:13.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3076" for this suite. 
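
The downward-API test above asserts that when a container declares no cpu limit, a resourceFieldRef on limits.cpu resolves to the node's allocatable cpu instead of failing. A sketch of a manifest exercising that behavior (names and the busybox image are illustrative, not the test's fixtures):

package main

import (
	"log"
	"os/exec"
	"strings"
)

// The container sets no cpu limit, so the downward API file below should
// report the node's allocatable cpu -- the behavior the test asserts.
const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-cpu-default
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
`

func main() {
	cmd := exec.Command("kubectl", "create", "-f", "-", "--namespace=downward-api-3076")
	cmd.Stdin = strings.NewReader(manifest)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("create failed: %v: %s", err, out)
	}
}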
May 11 13:04:21.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:04:21.584: INFO: namespace downward-api-3076 deletion completed in 8.100330568s • [SLOW TEST:14.519 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:04:21.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 11 13:04:29.928: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 13:04:29.953: INFO: Pod pod-with-prestop-exec-hook still exists May 11 13:04:31.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 13:04:31.957: INFO: Pod pod-with-prestop-exec-hook still exists May 11 13:04:33.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 13:04:33.957: INFO: Pod pod-with-prestop-exec-hook still exists May 11 13:04:35.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 13:04:35.956: INFO: Pod pod-with-prestop-exec-hook still exists May 11 13:04:37.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 13:04:37.957: INFO: Pod pod-with-prestop-exec-hook still exists May 11 13:04:39.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 13:04:39.957: INFO: Pod pod-with-prestop-exec-hook still exists May 11 13:04:41.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 13:04:41.957: INFO: Pod pod-with-prestop-exec-hook still exists May 11 13:04:43.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 13:04:43.958: INFO: Pod pod-with-prestop-exec-hook still exists May 11 13:04:45.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 13:04:45.957: INFO: Pod pod-with-prestop-exec-hook still exists May 11 13:04:47.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 13:04:47.957: INFO: Pod pod-with-prestop-exec-hook still exists May 11 13:04:49.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 13:04:49.958: INFO: Pod pod-with-prestop-exec-hook still exists May 11 13:04:51.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 13:04:51.957: 
INFO: Pod pod-with-prestop-exec-hook still exists May 11 13:04:53.954: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 13:04:53.957: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:04:53.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3248" for this suite. May 11 13:05:15.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:05:16.053: INFO: namespace container-lifecycle-hook-3248 deletion completed in 22.087346271s • [SLOW TEST:54.469 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:05:16.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-455fe1bb-fd6c-4f9c-91f1-3ea8f96c5250 STEP: Creating a pod to test consume configMaps May 11 13:05:16.159: INFO: Waiting up to 5m0s for pod "pod-configmaps-dc96ebf2-6e0c-4a80-9a1b-5b671e5afde1" in namespace "configmap-8718" to be "success or failure" May 11 13:05:16.163: INFO: Pod "pod-configmaps-dc96ebf2-6e0c-4a80-9a1b-5b671e5afde1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.356376ms May 11 13:05:18.302: INFO: Pod "pod-configmaps-dc96ebf2-6e0c-4a80-9a1b-5b671e5afde1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142585198s May 11 13:05:20.380: INFO: Pod "pod-configmaps-dc96ebf2-6e0c-4a80-9a1b-5b671e5afde1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.220716774s May 11 13:05:22.383: INFO: Pod "pod-configmaps-dc96ebf2-6e0c-4a80-9a1b-5b671e5afde1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.224480563s STEP: Saw pod success May 11 13:05:22.383: INFO: Pod "pod-configmaps-dc96ebf2-6e0c-4a80-9a1b-5b671e5afde1" satisfied condition "success or failure" May 11 13:05:22.386: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-dc96ebf2-6e0c-4a80-9a1b-5b671e5afde1 container configmap-volume-test: STEP: delete the pod May 11 13:05:22.405: INFO: Waiting for pod pod-configmaps-dc96ebf2-6e0c-4a80-9a1b-5b671e5afde1 to disappear May 11 13:05:22.410: INFO: Pod pod-configmaps-dc96ebf2-6e0c-4a80-9a1b-5b671e5afde1 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:05:22.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8718" for this suite. May 11 13:05:28.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:05:28.468: INFO: namespace configmap-8718 deletion completed in 6.056624591s • [SLOW TEST:12.415 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:05:28.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 13:05:28.507: INFO: Creating deployment "nginx-deployment" May 11 13:05:28.536: INFO: Waiting for observed generation 1 May 11 13:05:30.811: INFO: Waiting for all required pods to come up May 11 13:05:30.815: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running May 11 13:05:42.823: INFO: Waiting for deployment "nginx-deployment" to complete May 11 13:05:42.829: INFO: Updating deployment "nginx-deployment" with a non-existent image May 11 13:05:42.836: INFO: Updating deployment nginx-deployment May 11 13:05:42.836: INFO: Waiting for observed generation 2 May 11 13:05:45.040: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 11 13:05:45.043: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 11 13:05:45.045: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 11 13:05:45.052: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 11 13:05:45.052: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 11 13:05:45.054: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number 
of replicas May 11 13:05:45.058: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas May 11 13:05:45.058: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 May 11 13:05:45.063: INFO: Updating deployment nginx-deployment May 11 13:05:45.063: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas May 11 13:05:45.607: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 11 13:05:46.071: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 11 13:05:46.131: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-6203,SelfLink:/apis/apps/v1/namespaces/deployment-6203/deployments/nginx-deployment,UID:a9c977bd-b33e-4719-b5d0-0612e39b256d,ResourceVersion:10246214,Generation:3,CreationTimestamp:2020-05-11 13:05:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-05-11 13:05:44 +0000 UTC 2020-05-11 13:05:28 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 
2020-05-11 13:05:45 +0000 UTC 2020-05-11 13:05:45 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} May 11 13:05:46.310: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-6203,SelfLink:/apis/apps/v1/namespaces/deployment-6203/replicasets/nginx-deployment-55fb7cb77f,UID:2fb5f880-ccde-42e1-a6ee-50608f54490c,ResourceVersion:10246248,Generation:3,CreationTimestamp:2020-05-11 13:05:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment a9c977bd-b33e-4719-b5d0-0612e39b256d 0xc001f2d947 0xc001f2d948}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 11 13:05:46.310: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 11 13:05:46.310: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-6203,SelfLink:/apis/apps/v1/namespaces/deployment-6203/replicasets/nginx-deployment-7b8c6f4498,UID:9ac76b7f-b7fd-4bcc-a775-bf3d4346e235,ResourceVersion:10246246,Generation:3,CreationTimestamp:2020-05-11 13:05:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment a9c977bd-b33e-4719-b5d0-0612e39b256d 0xc001f2da17 0xc001f2da18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 11 13:05:46.388: INFO: Pod "nginx-deployment-55fb7cb77f-25pb8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-25pb8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-55fb7cb77f-25pb8,UID:f562e926-0992-4272-9129-616cc54a8065,ResourceVersion:10246250,Generation:0,CreationTimestamp:2020-05-11 13:05:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2fb5f880-ccde-42e1-a6ee-50608f54490c 0xc001d43c90 0xc001d43c91}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d43d10} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d43d30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.388: INFO: Pod "nginx-deployment-55fb7cb77f-4dgbm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4dgbm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-55fb7cb77f-4dgbm,UID:ba4affdc-5061-407d-998a-e82dfc38d515,ResourceVersion:10246236,Generation:0,CreationTimestamp:2020-05-11 13:05:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2fb5f880-ccde-42e1-a6ee-50608f54490c 0xc001d43db7 0xc001d43db8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d43e30} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001d43e50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.388: INFO: Pod "nginx-deployment-55fb7cb77f-b8f4m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-b8f4m,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-55fb7cb77f-b8f4m,UID:3a243825-423c-4e80-b0da-693d5b89a584,ResourceVersion:10246169,Generation:0,CreationTimestamp:2020-05-11 13:05:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2fb5f880-ccde-42e1-a6ee-50608f54490c 0xc001d43ed7 0xc001d43ed8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d43f50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d43f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-11 13:05:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} 
May 11 13:05:46.388: INFO: Pod "nginx-deployment-55fb7cb77f-bf979" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bf979,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-55fb7cb77f-bf979,UID:444a77d7-3d07-423f-b6ae-9fd4420ac3e5,ResourceVersion:10246185,Generation:0,CreationTimestamp:2020-05-11 13:05:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2fb5f880-ccde-42e1-a6ee-50608f54490c 0xc002b88047 0xc002b88048}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b880c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b880e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-11 13:05:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.388: INFO: Pod "nginx-deployment-55fb7cb77f-bgbpg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bgbpg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-55fb7cb77f-bgbpg,UID:16966b7f-d4d0-460a-8288-4b541d7fafb3,ResourceVersion:10246218,Generation:0,CreationTimestamp:2020-05-11 
13:05:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2fb5f880-ccde-42e1-a6ee-50608f54490c 0xc002b881b7 0xc002b881b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b88230} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b88250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.388: INFO: Pod "nginx-deployment-55fb7cb77f-ctbfr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ctbfr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-55fb7cb77f-ctbfr,UID:dd9470e8-97ae-4226-94a3-6a1a39649308,ResourceVersion:10246219,Generation:0,CreationTimestamp:2020-05-11 13:05:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2fb5f880-ccde-42e1-a6ee-50608f54490c 0xc002b882d7 0xc002b882d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b88350} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b88370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.389: INFO: Pod "nginx-deployment-55fb7cb77f-jxc85" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jxc85,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-55fb7cb77f-jxc85,UID:8ca7bc19-0a22-4792-a8bf-3d4235a653c9,ResourceVersion:10246205,Generation:0,CreationTimestamp:2020-05-11 13:05:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2fb5f880-ccde-42e1-a6ee-50608f54490c 0xc002b883f7 0xc002b883f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b88470} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002b88490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:45 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.389: INFO: Pod "nginx-deployment-55fb7cb77f-sqfmw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-sqfmw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-55fb7cb77f-sqfmw,UID:567ec04a-e69c-44e3-ab54-9d27e9a04fed,ResourceVersion:10246238,Generation:0,CreationTimestamp:2020-05-11 13:05:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2fb5f880-ccde-42e1-a6ee-50608f54490c 0xc002b88517 0xc002b88518}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b88590} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b885b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.389: INFO: Pod "nginx-deployment-55fb7cb77f-t5j4d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-t5j4d,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-55fb7cb77f-t5j4d,UID:5c134480-ed84-4490-a848-9672aa9d2be9,ResourceVersion:10246239,Generation:0,CreationTimestamp:2020-05-11 13:05:46 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2fb5f880-ccde-42e1-a6ee-50608f54490c 0xc002b88637 0xc002b88638}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b886b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b886d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.389: INFO: Pod "nginx-deployment-55fb7cb77f-v25vz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-v25vz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-55fb7cb77f-v25vz,UID:0c70e780-8221-4f6a-a7b6-6683586ec561,ResourceVersion:10246240,Generation:0,CreationTimestamp:2020-05-11 13:05:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2fb5f880-ccde-42e1-a6ee-50608f54490c 0xc002b88757 0xc002b88758}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b887d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b887f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.389: INFO: Pod "nginx-deployment-55fb7cb77f-wgv57" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wgv57,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-55fb7cb77f-wgv57,UID:9ba2d886-7cf7-4076-8523-6ec90aa03fae,ResourceVersion:10246159,Generation:0,CreationTimestamp:2020-05-11 13:05:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2fb5f880-ccde-42e1-a6ee-50608f54490c 0xc002b88877 0xc002b88878}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b888f0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002b88910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-11 13:05:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.389: INFO: Pod "nginx-deployment-55fb7cb77f-zjc8w" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zjc8w,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-55fb7cb77f-zjc8w,UID:bbd1469b-1f1f-4790-a763-4568778b4718,ResourceVersion:10246188,Generation:0,CreationTimestamp:2020-05-11 13:05:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2fb5f880-ccde-42e1-a6ee-50608f54490c 0xc002b889e7 0xc002b889e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b88a60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b88a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-11 13:05:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.389: INFO: Pod "nginx-deployment-55fb7cb77f-zl2f6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zl2f6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-55fb7cb77f-zl2f6,UID:5498947d-b8f9-4e9d-802a-1ce543d2473b,ResourceVersion:10246157,Generation:0,CreationTimestamp:2020-05-11 13:05:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2fb5f880-ccde-42e1-a6ee-50608f54490c 0xc002b88b57 0xc002b88b58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b88bd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b88bf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-11 13:05:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.390: INFO: Pod "nginx-deployment-7b8c6f4498-4bc96" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4bc96,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-7b8c6f4498-4bc96,UID:b5f06b09-3e84-4cbe-8374-09fdf37ff1d5,ResourceVersion:10246100,Generation:0,CreationTimestamp:2020-05-11 13:05:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9ac76b7f-b7fd-4bcc-a775-bf3d4346e235 0xc002b88cc7 0xc002b88cc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b88d40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b88d60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.212,StartTime:2020-05-11 13:05:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-11 13:05:39 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f949f025077ac5f817b2ed29654b30fe1cb738b654794578456a773067068680}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.390: INFO: Pod "nginx-deployment-7b8c6f4498-4gzm2" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4gzm2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-7b8c6f4498-4gzm2,UID:1df95130-4c30-452e-9446-6aef26729e79,ResourceVersion:10246068,Generation:0,CreationTimestamp:2020-05-11 13:05:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9ac76b7f-b7fd-4bcc-a775-bf3d4346e235 0xc002b88e37 0xc002b88e38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b88eb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b88ed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.126,StartTime:2020-05-11 13:05:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-11 13:05:34 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1e3b222ade9eac2405bc6c64920e4170d62ddcffa45962390c09e9b4a74434fd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.390: INFO: Pod "nginx-deployment-7b8c6f4498-6clhq" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6clhq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-7b8c6f4498-6clhq,UID:0338b22d-8063-4677-9cd6-b9759e620557,ResourceVersion:10246116,Generation:0,CreationTimestamp:2020-05-11 13:05:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9ac76b7f-b7fd-4bcc-a775-bf3d4346e235 0xc002b88fa7 0xc002b88fa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b89020} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b89040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.211,StartTime:2020-05-11 13:05:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-11 13:05:40 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e535b315622bd0bff1969de55939a0f2d90a872671e06cd096ec850e3e70577b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.390: INFO: Pod "nginx-deployment-7b8c6f4498-7lqjp" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7lqjp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-7b8c6f4498-7lqjp,UID:754c78fa-22bd-424d-a39f-6cc416f857ad,ResourceVersion:10246119,Generation:0,CreationTimestamp:2020-05-11 13:05:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9ac76b7f-b7fd-4bcc-a775-bf3d4346e235 0xc002b89117 0xc002b89118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b89190} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b891b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.130,StartTime:2020-05-11 13:05:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-11 13:05:40 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://8caf1d84f1e42310186d13a80efa5e9bdf352fc1af090bebc779e8ffab8467ce}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.390: INFO: Pod "nginx-deployment-7b8c6f4498-9wcx2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9wcx2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-7b8c6f4498-9wcx2,UID:81b75e88-2847-4a66-a928-fc8972cc42ed,ResourceVersion:10246244,Generation:0,CreationTimestamp:2020-05-11 13:05:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9ac76b7f-b7fd-4bcc-a775-bf3d4346e235 0xc002b89287 0xc002b89288}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b89300} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b89320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.390: INFO: Pod "nginx-deployment-7b8c6f4498-bgq8n" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bgq8n,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-7b8c6f4498-bgq8n,UID:5e0ce1c2-0fe0-409a-927b-e54cb1dc3bf4,ResourceVersion:10246243,Generation:0,CreationTimestamp:2020-05-11 13:05:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9ac76b7f-b7fd-4bcc-a775-bf3d4346e235 0xc002b893a7 0xc002b893a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b89420} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b89440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.390: INFO: Pod "nginx-deployment-7b8c6f4498-cmjgv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cmjgv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-7b8c6f4498-cmjgv,UID:f6dfc550-9030-4cc3-a093-924f72440fad,ResourceVersion:10246223,Generation:0,CreationTimestamp:2020-05-11 13:05:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9ac76b7f-b7fd-4bcc-a775-bf3d4346e235 0xc002b894c7 0xc002b894c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b89540} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b89560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.390: INFO: Pod "nginx-deployment-7b8c6f4498-dzspd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dzspd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-7b8c6f4498-dzspd,UID:6582b878-6351-412d-99b3-6f2de425b3f3,ResourceVersion:10246222,Generation:0,CreationTimestamp:2020-05-11 13:05:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9ac76b7f-b7fd-4bcc-a775-bf3d4346e235 0xc002b895e7 0xc002b895e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b89660} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002b89680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.390: INFO: Pod "nginx-deployment-7b8c6f4498-gshfs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gshfs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-7b8c6f4498-gshfs,UID:163a9a2a-ab7f-4126-bf44-b0779afb74bf,ResourceVersion:10246220,Generation:0,CreationTimestamp:2020-05-11 13:05:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9ac76b7f-b7fd-4bcc-a775-bf3d4346e235 0xc002b89707 0xc002b89708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b89780} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b897a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:45 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-11 13:05:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.391: INFO: Pod "nginx-deployment-7b8c6f4498-hmhw8" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hmhw8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-7b8c6f4498-hmhw8,UID:5141656b-fb79-4067-a52e-d69308f1ba03,ResourceVersion:10246122,Generation:0,CreationTimestamp:2020-05-11 13:05:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9ac76b7f-b7fd-4bcc-a775-bf3d4346e235 0xc002b89867 0xc002b89868}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b898e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b89900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.214,StartTime:2020-05-11 13:05:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-11 13:05:41 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://bfc37ccc550d2ae409506f7d2b9da73f844987755500c1691334102998f6c853}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.391: INFO: Pod "nginx-deployment-7b8c6f4498-j8qlf" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-j8qlf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-7b8c6f4498-j8qlf,UID:f6e56b76-ac6c-4271-b003-05f34a9337fe,ResourceVersion:10246242,Generation:0,CreationTimestamp:2020-05-11 13:05:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9ac76b7f-b7fd-4bcc-a775-bf3d4346e235 0xc002b899d7 0xc002b899d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b89a50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b89a70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.391: INFO: Pod "nginx-deployment-7b8c6f4498-m5gcp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-m5gcp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-7b8c6f4498-m5gcp,UID:35b36963-d448-4126-9634-2d3f97f40bdc,ResourceVersion:10246101,Generation:0,CreationTimestamp:2020-05-11 13:05:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9ac76b7f-b7fd-4bcc-a775-bf3d4346e235 0xc002b89af7 0xc002b89af8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b89b70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b89b90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.129,StartTime:2020-05-11 13:05:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-11 13:05:39 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://55147d5e984a4bf4863f23fa9d09922c4fb6a691ad75104fc03b1b83a4c1d3cf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.391: INFO: Pod "nginx-deployment-7b8c6f4498-mhdfm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mhdfm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-7b8c6f4498-mhdfm,UID:0cab93e5-5e6f-4f47-88a5-d6919169926d,ResourceVersion:10246209,Generation:0,CreationTimestamp:2020-05-11 13:05:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9ac76b7f-b7fd-4bcc-a775-bf3d4346e235 0xc002b89c67 0xc002b89c68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b89ce0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b89d00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:45 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.391: INFO: Pod "nginx-deployment-7b8c6f4498-ns6bz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ns6bz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-7b8c6f4498-ns6bz,UID:cf217d61-103f-4f32-b1f3-bf2617e2e792,ResourceVersion:10246221,Generation:0,CreationTimestamp:2020-05-11 13:05:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9ac76b7f-b7fd-4bcc-a775-bf3d4346e235 0xc002b89d87 0xc002b89d88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b89e00} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002b89e20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.391: INFO: Pod "nginx-deployment-7b8c6f4498-q9vkd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q9vkd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-7b8c6f4498-q9vkd,UID:67ec1d8e-34b5-41d5-889d-891fbb86dd68,ResourceVersion:10246224,Generation:0,CreationTimestamp:2020-05-11 13:05:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9ac76b7f-b7fd-4bcc-a775-bf3d4346e235 0xc002b89ea7 0xc002b89ea8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b89f20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b89f40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.391: INFO: Pod "nginx-deployment-7b8c6f4498-r5b9p" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r5b9p,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-7b8c6f4498-r5b9p,UID:8359b90a-7618-49d1-9069-f6a6073ca216,ResourceVersion:10246211,Generation:0,CreationTimestamp:2020-05-11 13:05:45 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9ac76b7f-b7fd-4bcc-a775-bf3d4346e235 0xc002b89fc7 0xc002b89fc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ba8040} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ba8060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:45 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.391: INFO: Pod "nginx-deployment-7b8c6f4498-t997f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-t997f,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-7b8c6f4498-t997f,UID:7ca88c47-1241-4a67-baa5-9b1306ecabc3,ResourceVersion:10246245,Generation:0,CreationTimestamp:2020-05-11 13:05:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9ac76b7f-b7fd-4bcc-a775-bf3d4346e235 0xc002ba80e7 0xc002ba80e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ba8160} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ba8180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.391: INFO: Pod "nginx-deployment-7b8c6f4498-vr99j" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vr99j,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-7b8c6f4498-vr99j,UID:a9c724ef-d3ce-42e3-b83f-257c0fc2740f,ResourceVersion:10246241,Generation:0,CreationTimestamp:2020-05-11 13:05:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9ac76b7f-b7fd-4bcc-a775-bf3d4346e235 0xc002ba8207 0xc002ba8208}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ba8280} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002ba82a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.391: INFO: Pod "nginx-deployment-7b8c6f4498-zdqcw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zdqcw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-7b8c6f4498-zdqcw,UID:52062882-e2ab-4479-b05d-b521c960eb86,ResourceVersion:10246093,Generation:0,CreationTimestamp:2020-05-11 13:05:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9ac76b7f-b7fd-4bcc-a775-bf3d4346e235 0xc002ba8327 0xc002ba8328}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ba83a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ba83c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.128,StartTime:2020-05-11 13:05:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-11 13:05:39 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
containerd://cdcf178b71c2e915fb223c2044b3f593431a066c9d110f1eee810e7b7d080801}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 13:05:46.391: INFO: Pod "nginx-deployment-7b8c6f4498-zh7qm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zh7qm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6203,SelfLink:/api/v1/namespaces/deployment-6203/pods/nginx-deployment-7b8c6f4498-zh7qm,UID:fc8d8db2-b6b2-4784-a1c5-5523f7a64757,ResourceVersion:10246081,Generation:0,CreationTimestamp:2020-05-11 13:05:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 9ac76b7f-b7fd-4bcc-a775-bf3d4346e235 0xc002ba8497 0xc002ba8498}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-58wbv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-58wbv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-58wbv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ba8510} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ba8530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:05:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.127,StartTime:2020-05-11 13:05:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-11 13:05:35 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1faedf2d476f5cf30fda09f9dc557ca2b740ad97b8783638a97cd9a48753ca1e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:05:46.391: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6203" for this suite. May 11 13:06:08.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:06:09.034: INFO: namespace deployment-6203 deletion completed in 22.60393528s • [SLOW TEST:40.565 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:06:09.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 13:06:09.516: INFO: Waiting up to 5m0s for pod "downwardapi-volume-934f5cc0-4532-49cc-af1e-c42c5544a5ef" in namespace "projected-3340" to be "success or failure" May 11 13:06:09.671: INFO: Pod "downwardapi-volume-934f5cc0-4532-49cc-af1e-c42c5544a5ef": Phase="Pending", Reason="", readiness=false. Elapsed: 154.386783ms May 11 13:06:11.675: INFO: Pod "downwardapi-volume-934f5cc0-4532-49cc-af1e-c42c5544a5ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158906846s May 11 13:06:13.685: INFO: Pod "downwardapi-volume-934f5cc0-4532-49cc-af1e-c42c5544a5ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.168953461s STEP: Saw pod success May 11 13:06:13.685: INFO: Pod "downwardapi-volume-934f5cc0-4532-49cc-af1e-c42c5544a5ef" satisfied condition "success or failure" May 11 13:06:13.687: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-934f5cc0-4532-49cc-af1e-c42c5544a5ef container client-container: STEP: delete the pod May 11 13:06:13.952: INFO: Waiting for pod downwardapi-volume-934f5cc0-4532-49cc-af1e-c42c5544a5ef to disappear May 11 13:06:14.064: INFO: Pod downwardapi-volume-934f5cc0-4532-49cc-af1e-c42c5544a5ef no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:06:14.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3340" for this suite. 
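For reference, the projected downward-API case above boils down to a pod of roughly the following shape; the name, image, and command are illustrative, not the exact spec the suite generates. Because the container declares no CPU limit, the kubelet writes the node's allocatable CPU into the projected file, which is what the test asserts on.

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # illustrative; the suite uses its own test image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu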
May 11 13:06:20.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:06:20.163: INFO: namespace projected-3340 deletion completed in 6.09543164s • [SLOW TEST:11.129 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:06:20.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 11 13:06:20.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8616' May 11 13:06:23.143: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 11 13:06:23.143: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 May 11 13:06:23.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-8616' May 11 13:06:23.278: INFO: stderr: "" May 11 13:06:23.278: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:06:23.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8616" for this suite. 
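The deprecation warning above points at kubectl create; the deprecated --generator=job/v1 invocation expands to roughly this Job manifest (labels and selector that the generator adds automatically are omitted):

apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
      restartPolicy: OnFailure        # the --restart=OnFailure flag is what selects the job generator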
May 11 13:06:29.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:06:29.346: INFO: namespace kubectl-8616 deletion completed in 6.064829649s • [SLOW TEST:9.183 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:06:29.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs May 11 13:06:29.384: INFO: Waiting up to 5m0s for pod "pod-653aa0c1-6934-441c-bdea-73a8cd9ce080" in namespace "emptydir-5662" to be "success or failure" May 11 13:06:29.403: INFO: Pod "pod-653aa0c1-6934-441c-bdea-73a8cd9ce080": Phase="Pending", Reason="", readiness=false. Elapsed: 18.920443ms May 11 13:06:31.407: INFO: Pod "pod-653aa0c1-6934-441c-bdea-73a8cd9ce080": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023243821s May 11 13:06:33.411: INFO: Pod "pod-653aa0c1-6934-441c-bdea-73a8cd9ce080": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027003117s STEP: Saw pod success May 11 13:06:33.411: INFO: Pod "pod-653aa0c1-6934-441c-bdea-73a8cd9ce080" satisfied condition "success or failure" May 11 13:06:33.413: INFO: Trying to get logs from node iruya-worker2 pod pod-653aa0c1-6934-441c-bdea-73a8cd9ce080 container test-container: STEP: delete the pod May 11 13:06:33.432: INFO: Waiting for pod pod-653aa0c1-6934-441c-bdea-73a8cd9ce080 to disappear May 11 13:06:33.450: INFO: Pod pod-653aa0c1-6934-441c-bdea-73a8cd9ce080 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:06:33.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5662" for this suite. 
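The (non-root,0666,tmpfs) variant above combines a memory-backed emptyDir with a non-root security context; the test binary then creates a file with mode 0666 inside the mount and reads it back. A minimal sketch of that pod shape, with illustrative name, UID, image, and command:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example   # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # any non-root UID; the exact value is an assumption
  containers:
  - name: test-container
    image: busybox               # illustrative; the suite uses its own mounttest image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir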
May 11 13:06:39.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:06:39.522: INFO: namespace emptydir-5662 deletion completed in 6.068210101s • [SLOW TEST:10.176 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:06:39.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-8d3f1180-b695-4f30-bc5b-b853379c3102 in namespace container-probe-6182 May 11 13:06:46.251: INFO: Started pod busybox-8d3f1180-b695-4f30-bc5b-b853379c3102 in namespace container-probe-6182 STEP: checking the pod's current state and verifying that restartCount is present May 11 13:06:46.254: INFO: Initial restart count of pod busybox-8d3f1180-b695-4f30-bc5b-b853379c3102 is 0 May 11 13:07:40.542: INFO: Restart count of pod container-probe-6182/busybox-8d3f1180-b695-4f30-bc5b-b853379c3102 is now 1 (54.288053668s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:07:41.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6182" for this suite. 
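The restart observed above (count 0 to 1 after ~54s) is the liveness probe doing its job: the container removes /tmp/health partway through, the exec probe starts failing, and the kubelet restarts the container. A standard manifest for this pattern; the timings and image are illustrative, not the suite's exact values:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-example   # illustrative
spec:
  containers:
  - name: busybox
    image: busybox
    # create the probe file, delete it after 10s, then idle so the probe starts failing
    command: ["sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
      periodSeconds: 5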
May 11 13:07:48.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:07:48.550: INFO: namespace container-probe-6182 deletion completed in 6.861077487s • [SLOW TEST:69.028 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:07:48.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC May 11 13:07:48.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5359' May 11 13:07:48.947: INFO: stderr: "" May 11 13:07:48.947: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 11 13:07:49.952: INFO: Selector matched 1 pods for map[app:redis] May 11 13:07:49.953: INFO: Found 0 / 1 May 11 13:07:50.953: INFO: Selector matched 1 pods for map[app:redis] May 11 13:07:50.953: INFO: Found 0 / 1 May 11 13:07:52.052: INFO: Selector matched 1 pods for map[app:redis] May 11 13:07:52.052: INFO: Found 0 / 1 May 11 13:07:52.951: INFO: Selector matched 1 pods for map[app:redis] May 11 13:07:52.951: INFO: Found 0 / 1 May 11 13:07:53.952: INFO: Selector matched 1 pods for map[app:redis] May 11 13:07:53.952: INFO: Found 1 / 1 May 11 13:07:53.952: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 11 13:07:53.954: INFO: Selector matched 1 pods for map[app:redis] May 11 13:07:53.954: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 11 13:07:53.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-8s5tl --namespace=kubectl-5359 -p {"metadata":{"annotations":{"x":"y"}}}' May 11 13:07:54.056: INFO: stderr: "" May 11 13:07:54.056: INFO: stdout: "pod/redis-master-8s5tl patched\n" STEP: checking annotations May 11 13:07:54.078: INFO: Selector matched 1 pods for map[app:redis] May 11 13:07:54.078: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:07:54.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5359" for this suite. 
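The inline patch above, {"metadata":{"annotations":{"x":"y"}}}, is a strategic-merge patch; the same body can live in a file and be passed to kubectl patch -p. Written as YAML:

# patch.yaml -- equivalent to the inline JSON patch the test sends
metadata:
  annotations:
    x: "y"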
May 11 13:08:16.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:08:16.176: INFO: namespace kubectl-5359 deletion completed in 22.095571978s • [SLOW TEST:27.625 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:08:16.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-6788/secret-test-92aa2f80-71d5-4857-9dd6-dbf513d2b3ef STEP: Creating a pod to test consume secrets May 11 13:08:16.284: INFO: Waiting up to 5m0s for pod "pod-configmaps-bc4980f5-fe64-49c5-ad96-0e24e1fa669f" in namespace "secrets-6788" to be "success or failure" May 11 13:08:16.289: INFO: Pod "pod-configmaps-bc4980f5-fe64-49c5-ad96-0e24e1fa669f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.380557ms May 11 13:08:18.297: INFO: Pod "pod-configmaps-bc4980f5-fe64-49c5-ad96-0e24e1fa669f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013085227s May 11 13:08:20.302: INFO: Pod "pod-configmaps-bc4980f5-fe64-49c5-ad96-0e24e1fa669f": Phase="Running", Reason="", readiness=true. Elapsed: 4.017579535s May 11 13:08:22.305: INFO: Pod "pod-configmaps-bc4980f5-fe64-49c5-ad96-0e24e1fa669f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020500545s STEP: Saw pod success May 11 13:08:22.305: INFO: Pod "pod-configmaps-bc4980f5-fe64-49c5-ad96-0e24e1fa669f" satisfied condition "success or failure" May 11 13:08:22.307: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-bc4980f5-fe64-49c5-ad96-0e24e1fa669f container env-test: STEP: delete the pod May 11 13:08:22.326: INFO: Waiting for pod pod-configmaps-bc4980f5-fe64-49c5-ad96-0e24e1fa669f to disappear May 11 13:08:22.355: INFO: Pod pod-configmaps-bc4980f5-fe64-49c5-ad96-0e24e1fa669f no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:08:22.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6788" for this suite. 
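"Consumable via the environment" above means the pod maps a Secret key into an env var with secretKeyRef and the test then inspects the container's environment. A minimal sketch; the secret name mirrors the log, while the key and image are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-env-example     # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox                 # illustrative
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test-92aa2f80-71d5-4857-9dd6-dbf513d2b3ef
          key: data-1              # assumed key name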
May 11 13:08:28.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:08:28.543: INFO: namespace secrets-6788 deletion completed in 6.183851119s • [SLOW TEST:12.367 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:08:28.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0511 13:08:38.864541 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 11 13:08:38.864: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:08:38.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1775" for this suite. 
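"Not orphaning" above corresponds to deleting the RC with a non-orphan propagation policy, so the garbage collector removes the dependent pods as well. Sketched as the DeleteOptions body of that call, assuming background propagation (foreground would also satisfy "not orphaning"):

kind: DeleteOptions
apiVersion: v1
propagationPolicy: Background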
May 11 13:08:44.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:08:44.976: INFO: namespace gc-1775 deletion completed in 6.108843844s • [SLOW TEST:16.433 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:08:44.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token May 11 13:08:45.899: INFO: created pod pod-service-account-defaultsa May 11 13:08:45.899: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 11 13:08:45.906: INFO: created pod pod-service-account-mountsa May 11 13:08:45.906: INFO: pod pod-service-account-mountsa service account token volume mount: true May 11 13:08:45.942: INFO: created pod pod-service-account-nomountsa May 11 13:08:45.942: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 11 13:08:45.960: INFO: created pod pod-service-account-defaultsa-mountspec May 11 13:08:45.960: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 11 13:08:46.077: INFO: created pod pod-service-account-mountsa-mountspec May 11 13:08:46.077: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 11 13:08:46.128: INFO: created pod pod-service-account-nomountsa-mountspec May 11 13:08:46.128: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 11 13:08:46.226: INFO: created pod pod-service-account-defaultsa-nomountspec May 11 13:08:46.226: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 11 13:08:46.234: INFO: created pod pod-service-account-mountsa-nomountspec May 11 13:08:46.234: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 11 13:08:46.254: INFO: created pod pod-service-account-nomountsa-nomountspec May 11 13:08:46.254: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:08:46.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2336" for this suite. 
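The nine pods above walk the matrix of service-account default vs. pod-level override for token automounting. Both knobs are plain fields; a sketch of the "nomount" corner, with illustrative names and image:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa                        # illustrative, mirroring the "nomountsa" pods above
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nomountsa-nomountspec-example # illustrative
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false     # the pod-level field wins over the SA default
  containers:
  - name: main
    image: busybox                        # illustrative
    command: ["sleep", "3600"]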
May 11 13:09:20.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:09:20.475: INFO: namespace svcaccounts-2336 deletion completed in 34.16345319s • [SLOW TEST:35.498 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:09:20.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 13:09:21.180: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"cf0121de-20c9-4ce3-8b70-4b53ff5f6322", Controller:(*bool)(0xc0021ccfda), BlockOwnerDeletion:(*bool)(0xc0021ccfdb)}} May 11 13:09:21.185: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"699ad5a7-16d6-4736-861e-8323eb57b54c", Controller:(*bool)(0xc001d4349a), BlockOwnerDeletion:(*bool)(0xc001d4349b)}} May 11 13:09:21.218: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ed4c88f5-4d49-4c81-b491-15292f3e7280", Controller:(*bool)(0xc001d4366a), BlockOwnerDeletion:(*bool)(0xc001d4366b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:09:26.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2030" for this suite. 
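The dependency circle above is built purely out of metadata.ownerReferences: pod1 is owned by pod3, pod2 by pod1, pod3 by pod2. The shape of one link, with the uid copied from the dump above and an illustrative container spec:

apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: cf0121de-20c9-4ce3-8b70-4b53ff5f6322   # from the log line above
    controller: true
    blockOwnerDeletion: true
spec:
  containers:
  - name: main
    image: busybox                              # illustrative; the reference shape is the point
    command: ["sleep", "3600"]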
May 11 13:09:32.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:09:32.459: INFO: namespace gc-2030 deletion completed in 6.072969189s • [SLOW TEST:11.984 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:09:32.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:09:36.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9466" for this suite. May 11 13:09:43.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:09:43.091: INFO: namespace emptydir-wrapper-9466 deletion completed in 6.180229561s • [SLOW TEST:10.632 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:09:43.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-188/configmap-test-93cdb5b2-9cca-40f6-8352-37ea39be77a1 STEP: Creating a pod to test consume configMaps May 11 13:09:43.903: INFO: Waiting up to 5m0s for pod "pod-configmaps-d9096fa8-d448-490a-aae1-b1c0f4e79a01" in namespace "configmap-188" to be "success or failure" May 11 13:09:43.905: INFO: Pod "pod-configmaps-d9096fa8-d448-490a-aae1-b1c0f4e79a01": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1.813452ms May 11 13:09:46.473: INFO: Pod "pod-configmaps-d9096fa8-d448-490a-aae1-b1c0f4e79a01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.570065941s May 11 13:09:48.521: INFO: Pod "pod-configmaps-d9096fa8-d448-490a-aae1-b1c0f4e79a01": Phase="Running", Reason="", readiness=true. Elapsed: 4.617492068s May 11 13:09:50.524: INFO: Pod "pod-configmaps-d9096fa8-d448-490a-aae1-b1c0f4e79a01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.621118736s STEP: Saw pod success May 11 13:09:50.524: INFO: Pod "pod-configmaps-d9096fa8-d448-490a-aae1-b1c0f4e79a01" satisfied condition "success or failure" May 11 13:09:50.527: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-d9096fa8-d448-490a-aae1-b1c0f4e79a01 container env-test: STEP: delete the pod May 11 13:09:50.588: INFO: Waiting for pod pod-configmaps-d9096fa8-d448-490a-aae1-b1c0f4e79a01 to disappear May 11 13:09:50.596: INFO: Pod pod-configmaps-d9096fa8-d448-490a-aae1-b1c0f4e79a01 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:09:50.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-188" for this suite. May 11 13:09:56.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:09:56.704: INFO: namespace configmap-188 deletion completed in 6.105446755s • [SLOW TEST:13.614 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:09:56.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod May 11 13:09:57.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6509' May 11 13:09:57.528: INFO: stderr: "" May 11 13:09:57.528: INFO: stdout: "pod/pause created\n" May 11 13:09:57.528: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 11 13:09:57.528: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6509" to be "running and ready" May 11 13:09:57.543: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 15.600892ms May 11 13:09:59.547: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019291498s May 11 13:10:01.554: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.026542404s May 11 13:10:01.554: INFO: Pod "pause" satisfied condition "running and ready" May 11 13:10:01.554: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod May 11 13:10:01.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6509' May 11 13:10:01.655: INFO: stderr: "" May 11 13:10:01.655: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 11 13:10:01.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6509' May 11 13:10:01.759: INFO: stderr: "" May 11 13:10:01.759: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 11 13:10:01.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6509' May 11 13:10:01.863: INFO: stderr: "" May 11 13:10:01.863: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 11 13:10:01.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6509' May 11 13:10:01.960: INFO: stderr: "" May 11 13:10:01.960: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources May 11 13:10:01.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6509' May 11 13:10:02.094: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 13:10:02.094: INFO: stdout: "pod \"pause\" force deleted\n" May 11 13:10:02.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6509' May 11 13:10:02.277: INFO: stderr: "No resources found.\n" May 11 13:10:02.277: INFO: stdout: "" May 11 13:10:02.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6509 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 11 13:10:02.367: INFO: stderr: "" May 11 13:10:02.367: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:10:02.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6509" for this suite. 
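The label add/remove above is imperative (kubectl label pods pause testing-label=... and then the trailing-dash form to remove it); the state after the add step, written declaratively, is just metadata on the pause pod. The image is an assumption, since the pod spec itself is piped from stdin and never shown in the log:

apiVersion: v1
kind: Pod
metadata:
  name: pause
  labels:
    testing-label: testing-label-value   # removed again by "kubectl label pods pause testing-label-"
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1          # assumed image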
May 11 13:10:08.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:10:08.553: INFO: namespace kubectl-6509 deletion completed in 6.150090522s • [SLOW TEST:11.849 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:10:08.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0511 13:10:39.242192 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 11 13:10:39.242: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:10:39.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9012" for this suite. 
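Note: the garbage-collector spec above deletes a Deployment with deleteOptions.propagationPolicy=Orphan and then waits 30 seconds to confirm the dependent ReplicaSet is not collected; orphaning strips the ownerReference instead of cascading the delete. As a sketch with a kubectl of this suite's vintage (v1.15), where `--cascade=false` requests the Orphan policy (the deployment name is illustrative):

# delete only the Deployment object; the ReplicaSet it created is orphaned
kubectl delete deployment simple-deployment --cascade=false

# the ReplicaSet should still exist, now without an ownerReference
kubectl get rs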
May 11 13:10:45.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:10:45.411: INFO: namespace gc-9012 deletion completed in 6.165517691s • [SLOW TEST:36.857 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:10:45.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 11 13:10:52.587: INFO: Successfully updated pod "annotationupdateeeda6264-51d5-42ee-9c82-f6316f613ab6" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:10:54.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5233" for this suite. 
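Note: in the Downward API spec above, "Successfully updated pod" refers to the kubelet refreshing a downwardAPI volume in place after the pod's annotations change; no container restart is involved. A minimal sketch of such a pod, fed to `kubectl create -f -` the way the suite does (all names and values illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    build: one
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF

# the change below eventually shows up in /etc/podinfo/annotations
kubectl annotate pod annotationupdate-demo build=two --overwrite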
May 11 13:11:16.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:11:16.726: INFO: namespace downward-api-5233 deletion completed in 22.079872964s • [SLOW TEST:31.314 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:11:16.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-4x2m STEP: Creating a pod to test atomic-volume-subpath May 11 13:11:16.845: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4x2m" in namespace "subpath-2710" to be "success or failure" May 11 13:11:16.864: INFO: Pod "pod-subpath-test-configmap-4x2m": Phase="Pending", Reason="", readiness=false. Elapsed: 19.246432ms May 11 13:11:19.007: INFO: Pod "pod-subpath-test-configmap-4x2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162476137s May 11 13:11:21.011: INFO: Pod "pod-subpath-test-configmap-4x2m": Phase="Running", Reason="", readiness=true. Elapsed: 4.166177182s May 11 13:11:23.014: INFO: Pod "pod-subpath-test-configmap-4x2m": Phase="Running", Reason="", readiness=true. Elapsed: 6.169545207s May 11 13:11:25.018: INFO: Pod "pod-subpath-test-configmap-4x2m": Phase="Running", Reason="", readiness=true. Elapsed: 8.173347577s May 11 13:11:27.021: INFO: Pod "pod-subpath-test-configmap-4x2m": Phase="Running", Reason="", readiness=true. Elapsed: 10.176804915s May 11 13:11:29.030: INFO: Pod "pod-subpath-test-configmap-4x2m": Phase="Running", Reason="", readiness=true. Elapsed: 12.185633811s May 11 13:11:31.034: INFO: Pod "pod-subpath-test-configmap-4x2m": Phase="Running", Reason="", readiness=true. Elapsed: 14.189472934s May 11 13:11:33.038: INFO: Pod "pod-subpath-test-configmap-4x2m": Phase="Running", Reason="", readiness=true. Elapsed: 16.193281238s May 11 13:11:35.041: INFO: Pod "pod-subpath-test-configmap-4x2m": Phase="Running", Reason="", readiness=true. Elapsed: 18.196233515s May 11 13:11:37.045: INFO: Pod "pod-subpath-test-configmap-4x2m": Phase="Running", Reason="", readiness=true. Elapsed: 20.200165071s May 11 13:11:39.048: INFO: Pod "pod-subpath-test-configmap-4x2m": Phase="Running", Reason="", readiness=true. Elapsed: 22.203239808s May 11 13:11:41.109: INFO: Pod "pod-subpath-test-configmap-4x2m": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.264469738s May 11 13:11:43.114: INFO: Pod "pod-subpath-test-configmap-4x2m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.269101785s STEP: Saw pod success May 11 13:11:43.114: INFO: Pod "pod-subpath-test-configmap-4x2m" satisfied condition "success or failure" May 11 13:11:43.116: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-4x2m container test-container-subpath-configmap-4x2m: STEP: delete the pod May 11 13:11:43.139: INFO: Waiting for pod pod-subpath-test-configmap-4x2m to disappear May 11 13:11:43.162: INFO: Pod pod-subpath-test-configmap-4x2m no longer exists STEP: Deleting pod pod-subpath-test-configmap-4x2m May 11 13:11:43.162: INFO: Deleting pod "pod-subpath-test-configmap-4x2m" in namespace "subpath-2710" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:11:43.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2710" for this suite. May 11 13:11:49.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:11:49.242: INFO: namespace subpath-2710 deletion completed in 6.07291366s • [SLOW TEST:32.515 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:11:49.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container May 11 13:11:53.882: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7433 pod-service-account-42f5b910-e0a6-4a87-b6a7-a2e9613e05ce -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 11 13:11:54.123: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7433 pod-service-account-42f5b910-e0a6-4a87-b6a7-a2e9613e05ce -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 11 13:11:54.349: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7433 pod-service-account-42f5b910-e0a6-4a87-b6a7-a2e9613e05ce -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:11:54.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "svcaccounts-7433" for this suite. May 11 13:12:00.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:12:00.631: INFO: namespace svcaccounts-7433 deletion completed in 6.072062953s • [SLOW TEST:11.389 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:12:00.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 13:12:01.050: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6b8156a2-060c-4ecb-8c40-9cba716ee46d" in namespace "downward-api-3590" to be "success or failure" May 11 13:12:01.206: INFO: Pod "downwardapi-volume-6b8156a2-060c-4ecb-8c40-9cba716ee46d": Phase="Pending", Reason="", readiness=false. Elapsed: 156.134612ms May 11 13:12:03.239: INFO: Pod "downwardapi-volume-6b8156a2-060c-4ecb-8c40-9cba716ee46d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.189267441s May 11 13:12:05.243: INFO: Pod "downwardapi-volume-6b8156a2-060c-4ecb-8c40-9cba716ee46d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1933215s May 11 13:12:07.248: INFO: Pod "downwardapi-volume-6b8156a2-060c-4ecb-8c40-9cba716ee46d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.198262889s May 11 13:12:09.252: INFO: Pod "downwardapi-volume-6b8156a2-060c-4ecb-8c40-9cba716ee46d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.201578861s STEP: Saw pod success May 11 13:12:09.252: INFO: Pod "downwardapi-volume-6b8156a2-060c-4ecb-8c40-9cba716ee46d" satisfied condition "success or failure" May 11 13:12:09.254: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-6b8156a2-060c-4ecb-8c40-9cba716ee46d container client-container: STEP: delete the pod May 11 13:12:09.621: INFO: Waiting for pod downwardapi-volume-6b8156a2-060c-4ecb-8c40-9cba716ee46d to disappear May 11 13:12:09.694: INFO: Pod downwardapi-volume-6b8156a2-060c-4ecb-8c40-9cba716ee46d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:12:09.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3590" for this suite. 
May 11 13:12:15.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:12:15.816: INFO: namespace downward-api-3590 deletion completed in 6.119397822s • [SLOW TEST:15.185 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:12:15.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-49c3af9e-a5b9-4d3f-bee1-33291e15fe2c STEP: Creating the pod STEP: Updating configmap configmap-test-upd-49c3af9e-a5b9-4d3f-bee1-33291e15fe2c STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:13:50.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7658" for this suite. 
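Note: the ConfigMap spec above takes almost two minutes because files projected from a configMap volume are only rewritten when the kubelet resyncs the volume, so the test has to poll for the update rather than assert it immediately. A sketch of the update step (names and values illustrative):

kubectl create configmap test-upd --from-literal=data-1=value-1
# ...mount it in a pod via a configMap volume, then change the data in place:
kubectl create configmap test-upd --from-literal=data-1=value-2 \
  --dry-run -o yaml | kubectl replace -f -
# the mounted file catches up once the kubelet resyncs the volume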
May 11 13:14:14.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:14:14.800: INFO: namespace configmap-7658 deletion completed in 24.136280595s • [SLOW TEST:118.982 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:14:14.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults May 11 13:14:15.006: INFO: Waiting up to 5m0s for pod "client-containers-e3d62b33-0a0d-419a-9994-bad2070ea808" in namespace "containers-445" to be "success or failure" May 11 13:14:15.080: INFO: Pod "client-containers-e3d62b33-0a0d-419a-9994-bad2070ea808": Phase="Pending", Reason="", readiness=false. Elapsed: 74.668487ms May 11 13:14:17.084: INFO: Pod "client-containers-e3d62b33-0a0d-419a-9994-bad2070ea808": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078454673s May 11 13:14:19.123: INFO: Pod "client-containers-e3d62b33-0a0d-419a-9994-bad2070ea808": Phase="Running", Reason="", readiness=true. Elapsed: 4.11674942s May 11 13:14:21.126: INFO: Pod "client-containers-e3d62b33-0a0d-419a-9994-bad2070ea808": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.120275313s STEP: Saw pod success May 11 13:14:21.126: INFO: Pod "client-containers-e3d62b33-0a0d-419a-9994-bad2070ea808" satisfied condition "success or failure" May 11 13:14:21.128: INFO: Trying to get logs from node iruya-worker2 pod client-containers-e3d62b33-0a0d-419a-9994-bad2070ea808 container test-container: STEP: delete the pod May 11 13:14:21.423: INFO: Waiting for pod client-containers-e3d62b33-0a0d-419a-9994-bad2070ea808 to disappear May 11 13:14:21.656: INFO: Pod client-containers-e3d62b33-0a0d-419a-9994-bad2070ea808 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:14:21.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-445" for this suite. 
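Note: the Docker Containers spec above leaves both command and args unset, so the container runs whatever ENTRYPOINT and CMD are baked into the image, and the test checks the image's default output. A sketch, assuming an image whose defaults print something observable (the image name here is illustrative, not taken from the log):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: use-defaults-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0
    # no command: and no args:, so the image ENTRYPOINT/CMD run unmodified
EOF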
May 11 13:14:27.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:14:27.911: INFO: namespace containers-445 deletion completed in 6.251845409s • [SLOW TEST:13.111 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:14:27.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 13:14:28.016: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f115b737-fd42-4c70-b06d-435d1f44f30b" in namespace "projected-4509" to be "success or failure" May 11 13:14:28.034: INFO: Pod "downwardapi-volume-f115b737-fd42-4c70-b06d-435d1f44f30b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.297213ms May 11 13:14:30.297: INFO: Pod "downwardapi-volume-f115b737-fd42-4c70-b06d-435d1f44f30b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.281418982s May 11 13:14:32.410: INFO: Pod "downwardapi-volume-f115b737-fd42-4c70-b06d-435d1f44f30b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.394449312s May 11 13:14:34.414: INFO: Pod "downwardapi-volume-f115b737-fd42-4c70-b06d-435d1f44f30b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.397595848s STEP: Saw pod success May 11 13:14:34.414: INFO: Pod "downwardapi-volume-f115b737-fd42-4c70-b06d-435d1f44f30b" satisfied condition "success or failure" May 11 13:14:34.416: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-f115b737-fd42-4c70-b06d-435d1f44f30b container client-container: STEP: delete the pod May 11 13:14:34.509: INFO: Waiting for pod downwardapi-volume-f115b737-fd42-4c70-b06d-435d1f44f30b to disappear May 11 13:14:34.602: INFO: Pod downwardapi-volume-f115b737-fd42-4c70-b06d-435d1f44f30b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:14:34.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4509" for this suite. 
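Note: in the Projected downwardAPI spec above, "mode on item file" means the projected volume sets per-item file permissions, and the pod then stats the file to verify them. A minimal sketch using an illustrative 0400 mode:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            mode: 0400      # expected as -r-------- inside the container
            fieldRef:
              fieldPath: metadata.name
EOF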
May 11 13:14:40.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:14:40.675: INFO: namespace projected-4509 deletion completed in 6.070485865s • [SLOW TEST:12.764 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:14:40.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container May 11 13:14:47.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-a7e094aa-039c-41bc-9bc1-8cfb029d7d07 -c busybox-main-container --namespace=emptydir-4344 -- cat /usr/share/volumeshare/shareddata.txt' May 11 13:14:47.256: INFO: stderr: "I0511 13:14:47.175375 1238 log.go:172] (0xc0009e2420) (0xc0003d2a00) Create stream\nI0511 13:14:47.175424 1238 log.go:172] (0xc0009e2420) (0xc0003d2a00) Stream added, broadcasting: 1\nI0511 13:14:47.177588 1238 log.go:172] (0xc0009e2420) Reply frame received for 1\nI0511 13:14:47.177880 1238 log.go:172] (0xc0009e2420) (0xc00071c000) Create stream\nI0511 13:14:47.177898 1238 log.go:172] (0xc0009e2420) (0xc00071c000) Stream added, broadcasting: 3\nI0511 13:14:47.180005 1238 log.go:172] (0xc0009e2420) Reply frame received for 3\nI0511 13:14:47.180263 1238 log.go:172] (0xc0009e2420) (0xc0003d2aa0) Create stream\nI0511 13:14:47.180290 1238 log.go:172] (0xc0009e2420) (0xc0003d2aa0) Stream added, broadcasting: 5\nI0511 13:14:47.180948 1238 log.go:172] (0xc0009e2420) Reply frame received for 5\nI0511 13:14:47.251040 1238 log.go:172] (0xc0009e2420) Data frame received for 5\nI0511 13:14:47.251073 1238 log.go:172] (0xc0003d2aa0) (5) Data frame handling\nI0511 13:14:47.251093 1238 log.go:172] (0xc0009e2420) Data frame received for 3\nI0511 13:14:47.251105 1238 log.go:172] (0xc00071c000) (3) Data frame handling\nI0511 13:14:47.251115 1238 log.go:172] (0xc00071c000) (3) Data frame sent\nI0511 13:14:47.251127 1238 log.go:172] (0xc0009e2420) Data frame received for 3\nI0511 13:14:47.251137 1238 log.go:172] (0xc00071c000) (3) Data frame handling\nI0511 13:14:47.252320 1238 log.go:172] (0xc0009e2420) Data frame received for 1\nI0511 13:14:47.252332 1238 log.go:172] (0xc0003d2a00) (1) Data frame handling\nI0511 13:14:47.252339 1238 log.go:172] (0xc0003d2a00) (1) Data frame sent\nI0511 13:14:47.252476 1238 log.go:172] (0xc0009e2420) (0xc0003d2a00) Stream removed, broadcasting: 1\nI0511 
13:14:47.252505 1238 log.go:172] (0xc0009e2420) Go away received\nI0511 13:14:47.252890 1238 log.go:172] (0xc0009e2420) (0xc0003d2a00) Stream removed, broadcasting: 1\nI0511 13:14:47.252909 1238 log.go:172] (0xc0009e2420) (0xc00071c000) Stream removed, broadcasting: 3\nI0511 13:14:47.252933 1238 log.go:172] (0xc0009e2420) (0xc0003d2aa0) Stream removed, broadcasting: 5\n" May 11 13:14:47.256: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:14:47.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4344" for this suite. May 11 13:14:53.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:14:53.337: INFO: namespace emptydir-4344 deletion completed in 6.078596024s • [SLOW TEST:12.662 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:14:53.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-aba157e3-e99a-40ed-94ad-74f5c0380834 STEP: Creating a pod to test consume secrets May 11 13:14:53.432: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3f4e437f-2552-4396-a05b-3be43552a58f" in namespace "projected-9810" to be "success or failure" May 11 13:14:53.489: INFO: Pod "pod-projected-secrets-3f4e437f-2552-4396-a05b-3be43552a58f": Phase="Pending", Reason="", readiness=false. Elapsed: 57.211577ms May 11 13:14:55.493: INFO: Pod "pod-projected-secrets-3f4e437f-2552-4396-a05b-3be43552a58f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061347093s May 11 13:14:57.497: INFO: Pod "pod-projected-secrets-3f4e437f-2552-4396-a05b-3be43552a58f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064673519s May 11 13:14:59.500: INFO: Pod "pod-projected-secrets-3f4e437f-2552-4396-a05b-3be43552a58f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.067993042s STEP: Saw pod success May 11 13:14:59.500: INFO: Pod "pod-projected-secrets-3f4e437f-2552-4396-a05b-3be43552a58f" satisfied condition "success or failure" May 11 13:14:59.502: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-3f4e437f-2552-4396-a05b-3be43552a58f container projected-secret-volume-test: STEP: delete the pod May 11 13:14:59.653: INFO: Waiting for pod pod-projected-secrets-3f4e437f-2552-4396-a05b-3be43552a58f to disappear May 11 13:14:59.661: INFO: Pod pod-projected-secrets-3f4e437f-2552-4396-a05b-3be43552a58f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:14:59.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9810" for this suite. May 11 13:15:05.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:15:05.794: INFO: namespace projected-9810 deletion completed in 6.130358948s • [SLOW TEST:12.457 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:15:05.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 11 13:15:05.907: INFO: Pod name pod-release: Found 0 pods out of 1 May 11 13:15:10.911: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:15:11.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4392" for this suite. 
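Note: in the ReplicationController spec above, "releasing" a pod means relabeling it so it no longer matches the RC's selector; the controller then drops it from the replica count (removing its controller ownerReference) and creates a replacement pod. The label flip reduces to a sketch like this (the pod name suffix is illustrative):

# change the matched label on one managed pod
kubectl label pod pod-release-abc12 name=released --overwrite
# the RC no longer counts it and spins up a replacement
kubectl get pods -l name=pod-release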
May 11 13:15:18.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:15:18.086: INFO: namespace replication-controller-4392 deletion completed in 6.130805743s • [SLOW TEST:12.292 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:15:18.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller May 11 13:15:18.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5384' May 11 13:15:18.734: INFO: stderr: "" May 11 13:15:18.734: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 11 13:15:18.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5384' May 11 13:15:18.885: INFO: stderr: "" May 11 13:15:18.885: INFO: stdout: "update-demo-nautilus-9b4tp update-demo-nautilus-m482l " May 11 13:15:18.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9b4tp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5384' May 11 13:15:19.028: INFO: stderr: "" May 11 13:15:19.028: INFO: stdout: "" May 11 13:15:19.028: INFO: update-demo-nautilus-9b4tp is created but not running May 11 13:15:24.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5384' May 11 13:15:24.136: INFO: stderr: "" May 11 13:15:24.136: INFO: stdout: "update-demo-nautilus-9b4tp update-demo-nautilus-m482l " May 11 13:15:24.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9b4tp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5384' May 11 13:15:24.231: INFO: stderr: "" May 11 13:15:24.231: INFO: stdout: "true" May 11 13:15:24.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9b4tp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5384' May 11 13:15:24.370: INFO: stderr: "" May 11 13:15:24.370: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 13:15:24.370: INFO: validating pod update-demo-nautilus-9b4tp May 11 13:15:24.375: INFO: got data: { "image": "nautilus.jpg" } May 11 13:15:24.375: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 13:15:24.375: INFO: update-demo-nautilus-9b4tp is verified up and running May 11 13:15:24.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m482l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5384' May 11 13:15:24.469: INFO: stderr: "" May 11 13:15:24.469: INFO: stdout: "true" May 11 13:15:24.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m482l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5384' May 11 13:15:24.560: INFO: stderr: "" May 11 13:15:24.560: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 13:15:24.560: INFO: validating pod update-demo-nautilus-m482l May 11 13:15:24.564: INFO: got data: { "image": "nautilus.jpg" } May 11 13:15:24.564: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 13:15:24.564: INFO: update-demo-nautilus-m482l is verified up and running STEP: using delete to clean up resources May 11 13:15:24.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5384' May 11 13:15:24.674: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 11 13:15:24.674: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 11 13:15:24.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5384' May 11 13:15:24.767: INFO: stderr: "No resources found.\n" May 11 13:15:24.767: INFO: stdout: "" May 11 13:15:24.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5384 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 11 13:15:24.859: INFO: stderr: "" May 11 13:15:24.859: INFO: stdout: "update-demo-nautilus-9b4tp\nupdate-demo-nautilus-m482l\n" May 11 13:15:25.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5384' May 11 13:15:25.467: INFO: stderr: "No resources found.\n" May 11 13:15:25.467: INFO: stdout: "" May 11 13:15:25.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5384 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 11 13:15:25.651: INFO: stderr: "" May 11 13:15:25.651: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:15:25.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5384" for this suite. May 11 13:15:47.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:15:47.793: INFO: namespace kubectl-5384 deletion completed in 22.137685673s • [SLOW TEST:29.706 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:15:47.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 11 13:15:47.868: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 11 13:15:47.906: INFO: Waiting for terminating namespaces to be deleted... 
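Note: the Update Demo spec above polls pod state with Go templates rather than JSONPath; the `exists` guard keeps the template from erroring while status.containerStatuses is still empty. The running-state check, reproduced from the log (pod name and namespace as logged):

kubectl get pods update-demo-nautilus-9b4tp -o template \
  --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' \
  --namespace=kubectl-5384
# prints "true" once the update-demo container reports a running state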
May 11 13:15:47.909: INFO: Logging pods the kubelet thinks is on node iruya-worker before test May 11 13:15:47.915: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 11 13:15:47.915: INFO: Container kube-proxy ready: true, restart count 0 May 11 13:15:47.915: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 11 13:15:47.915: INFO: Container kindnet-cni ready: true, restart count 0 May 11 13:15:47.915: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test May 11 13:15:47.921: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) May 11 13:15:47.921: INFO: Container coredns ready: true, restart count 0 May 11 13:15:47.921: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) May 11 13:15:47.921: INFO: Container coredns ready: true, restart count 0 May 11 13:15:47.921: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) May 11 13:15:47.921: INFO: Container kindnet-cni ready: true, restart count 0 May 11 13:15:47.921: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) May 11 13:15:47.921: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160dfba943d732d7], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:15:48.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2369" for this suite. 
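Note: the SchedulerPredicates spec above creates a pod whose nodeSelector matches no node label, so it stays Pending and the scheduler emits the FailedScheduling event quoted in the log. A minimal sketch (the pod name matches the logged event; the selector key/value are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    some-label: no-node-has-this
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
# kubectl describe pod restricted-pod
#   Warning  FailedScheduling  0/3 nodes are available: 3 node(s) didn't match node selector.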
May 11 13:15:55.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:15:55.080: INFO: namespace sched-pred-2369 deletion completed in 6.13232302s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.287 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:15:55.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium May 11 13:15:55.257: INFO: Waiting up to 5m0s for pod "pod-afd6da13-1667-4110-91df-7701e86d08a9" in namespace "emptydir-6467" to be "success or failure" May 11 13:15:55.274: INFO: Pod "pod-afd6da13-1667-4110-91df-7701e86d08a9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.690403ms May 11 13:15:57.277: INFO: Pod "pod-afd6da13-1667-4110-91df-7701e86d08a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019882538s May 11 13:15:59.316: INFO: Pod "pod-afd6da13-1667-4110-91df-7701e86d08a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058479675s STEP: Saw pod success May 11 13:15:59.316: INFO: Pod "pod-afd6da13-1667-4110-91df-7701e86d08a9" satisfied condition "success or failure" May 11 13:15:59.318: INFO: Trying to get logs from node iruya-worker2 pod pod-afd6da13-1667-4110-91df-7701e86d08a9 container test-container: STEP: delete the pod May 11 13:15:59.386: INFO: Waiting for pod pod-afd6da13-1667-4110-91df-7701e86d08a9 to disappear May 11 13:15:59.490: INFO: Pod pod-afd6da13-1667-4110-91df-7701e86d08a9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:15:59.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6467" for this suite. 
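Note: in the EmptyDir spec above, "(root,0777,default)" encodes the test matrix: run as root, expect 0777 permissions on the mount, and use the default (node-disk) medium rather than Memory/tmpfs. A sketch of the shape being checked (names illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]   # expects drwxrwxrwx
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # medium "" is the default; medium: Memory would use tmpfs
EOF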
May 11 13:16:05.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:16:05.596: INFO: namespace emptydir-6467 deletion completed in 6.102882199s • [SLOW TEST:10.516 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:16:05.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:16:05.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4896" for this suite. 
May 11 13:16:11.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:16:11.815: INFO: namespace kubelet-test-4896 deletion completed in 6.071723863s • [SLOW TEST:6.219 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:16:11.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server May 11 13:16:11.892: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:16:11.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3079" for this suite. 
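Note: in the Proxy server spec above, `-p 0` tells kubectl proxy to bind an ephemeral port; the chosen port is printed on startup and the test then curls /api/ through it. As commands (the port is whatever the proxy reports):

kubectl proxy -p 0 --disable-filter &
# stdout: Starting to serve on 127.0.0.1:<random-port>
curl http://127.0.0.1:<random-port>/api/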
May 11 13:16:18.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:16:18.183: INFO: namespace kubectl-3079 deletion completed in 6.159470243s • [SLOW TEST:6.368 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:16:18.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 13:16:46.550: INFO: Container started at 2020-05-11 13:16:21 +0000 UTC, pod became ready at 2020-05-11 13:16:45 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:16:46.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8538" for this suite. 
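Note: the Probing container spec above checks two readiness-probe properties at once: the container must not report Ready before initialDelaySeconds has elapsed, and failing readiness never restarts a container (only liveness probes do), so the restart count must stay 0. A sketch with illustrative timings:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: test-webserver
    image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20   # not Ready before this delay
      periodSeconds: 5
EOF
# readiness gates Ready/endpoints only; the restart count stays 0 throughout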
May 11 13:17:09.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:17:09.217: INFO: namespace container-probe-8538 deletion completed in 22.664825669s • [SLOW TEST:51.034 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:17:09.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-b3b97421-6613-4115-b1d3-1c69dbd55554 STEP: Creating a pod to test consume secrets May 11 13:17:09.864: INFO: Waiting up to 5m0s for pod "pod-secrets-5bef11ca-a382-4966-8dfd-399eea7a68c4" in namespace "secrets-9940" to be "success or failure" May 11 13:17:09.912: INFO: Pod "pod-secrets-5bef11ca-a382-4966-8dfd-399eea7a68c4": Phase="Pending", Reason="", readiness=false. Elapsed: 47.927696ms May 11 13:17:11.917: INFO: Pod "pod-secrets-5bef11ca-a382-4966-8dfd-399eea7a68c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052858418s May 11 13:17:13.921: INFO: Pod "pod-secrets-5bef11ca-a382-4966-8dfd-399eea7a68c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056372662s STEP: Saw pod success May 11 13:17:13.921: INFO: Pod "pod-secrets-5bef11ca-a382-4966-8dfd-399eea7a68c4" satisfied condition "success or failure" May 11 13:17:13.923: INFO: Trying to get logs from node iruya-worker pod pod-secrets-5bef11ca-a382-4966-8dfd-399eea7a68c4 container secret-env-test: STEP: delete the pod May 11 13:17:13.957: INFO: Waiting for pod pod-secrets-5bef11ca-a382-4966-8dfd-399eea7a68c4 to disappear May 11 13:17:14.071: INFO: Pod pod-secrets-5bef11ca-a382-4966-8dfd-399eea7a68c4 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:17:14.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9940" for this suite. 
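Note: the Secrets spec above injects a secret value as an environment variable via env[].valueFrom.secretKeyRef, then echoes it and checks the pod logs. Minimal sketch (all names illustrative):

kubectl create secret generic secret-test --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
EOF
# kubectl logs pod-secrets-demo should contain SECRET_DATA=value-1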
May 11 13:17:20.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:17:20.178: INFO: namespace secrets-9940 deletion completed in 6.102966101s • [SLOW TEST:10.960 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:17:20.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 11 13:17:20.258: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 11 13:17:20.264: INFO: Waiting for terminating namespaces to be deleted... May 11 13:17:20.267: INFO: Logging pods the kubelet thinks is on node iruya-worker before test May 11 13:17:20.270: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 11 13:17:20.270: INFO: Container kube-proxy ready: true, restart count 0 May 11 13:17:20.270: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 11 13:17:20.270: INFO: Container kindnet-cni ready: true, restart count 0 May 11 13:17:20.270: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test May 11 13:17:20.273: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) May 11 13:17:20.273: INFO: Container kindnet-cni ready: true, restart count 0 May 11 13:17:20.273: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) May 11 13:17:20.273: INFO: Container kube-proxy ready: true, restart count 0 May 11 13:17:20.273: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) May 11 13:17:20.273: INFO: Container coredns ready: true, restart count 0 May 11 13:17:20.273: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) May 11 13:17:20.273: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 May 11 13:17:20.321: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 May 11 13:17:20.321: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2 May 11 13:17:20.321: INFO: Pod kindnet-gwz5g requesting resource cpu=100m 
on Node iruya-worker May 11 13:17:20.321: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 May 11 13:17:20.321: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker May 11 13:17:20.321: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires an unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-3f2c6a0e-5c8d-43ec-8018-afb499df0641.160dfbbecd6aa399], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5586/filler-pod-3f2c6a0e-5c8d-43ec-8018-afb499df0641 to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-3f2c6a0e-5c8d-43ec-8018-afb499df0641.160dfbbf569d595e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-3f2c6a0e-5c8d-43ec-8018-afb499df0641.160dfbbfb80a5efc], Reason = [Created], Message = [Created container filler-pod-3f2c6a0e-5c8d-43ec-8018-afb499df0641] STEP: Considering event: Type = [Normal], Name = [filler-pod-3f2c6a0e-5c8d-43ec-8018-afb499df0641.160dfbbfca52aa48], Reason = [Started], Message = [Started container filler-pod-3f2c6a0e-5c8d-43ec-8018-afb499df0641] STEP: Considering event: Type = [Normal], Name = [filler-pod-bbac1156-be2c-49d8-8bd5-26ca81bd0b2a.160dfbbecd35d975], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5586/filler-pod-bbac1156-be2c-49d8-8bd5-26ca81bd0b2a to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-bbac1156-be2c-49d8-8bd5-26ca81bd0b2a.160dfbbf1ef1dc70], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-bbac1156-be2c-49d8-8bd5-26ca81bd0b2a.160dfbbf8a370345], Reason = [Created], Message = [Created container filler-pod-bbac1156-be2c-49d8-8bd5-26ca81bd0b2a] STEP: Considering event: Type = [Normal], Name = [filler-pod-bbac1156-be2c-49d8-8bd5-26ca81bd0b2a.160dfbbfaae19eb4], Reason = [Started], Message = [Started container filler-pod-bbac1156-be2c-49d8-8bd5-26ca81bd0b2a] STEP: Considering event: Type = [Warning], Name = [additional-pod.160dfbc033d892b8], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:17:27.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5586" for this suite.
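The FailedScheduling event above is the expected outcome: once the filler pods consume most of each node's allocatable CPU, any pod whose CPU request cannot be satisfied on a schedulable node stays Pending. A minimal manifest that would trigger the same event might look like the following sketch (the request value is an assumption; the real test computes it from the remaining allocatable CPU):

  apiVersion: v1
  kind: Pod
  metadata:
    name: additional-pod
  spec:
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1
      resources:
        requests:
          cpu: "2"        # assumed value; anything above the remaining allocatable CPU suffices
        limits:
          cpu: "2"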
May 11 13:17:35.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:17:35.779: INFO: namespace sched-pred-5586 deletion completed in 8.142794869s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:15.601 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:17:35.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 13:17:35.925: INFO: Waiting up to 5m0s for pod "downwardapi-volume-56daadae-2fcd-40b7-84ee-b0ebb5c237f2" in namespace "downward-api-4996" to be "success or failure" May 11 13:17:35.937: INFO: Pod "downwardapi-volume-56daadae-2fcd-40b7-84ee-b0ebb5c237f2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.941788ms May 11 13:17:38.150: INFO: Pod "downwardapi-volume-56daadae-2fcd-40b7-84ee-b0ebb5c237f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224884711s May 11 13:17:40.153: INFO: Pod "downwardapi-volume-56daadae-2fcd-40b7-84ee-b0ebb5c237f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.228464815s STEP: Saw pod success May 11 13:17:40.153: INFO: Pod "downwardapi-volume-56daadae-2fcd-40b7-84ee-b0ebb5c237f2" satisfied condition "success or failure" May 11 13:17:40.156: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-56daadae-2fcd-40b7-84ee-b0ebb5c237f2 container client-container: STEP: delete the pod May 11 13:17:40.266: INFO: Waiting for pod downwardapi-volume-56daadae-2fcd-40b7-84ee-b0ebb5c237f2 to disappear May 11 13:17:40.278: INFO: Pod downwardapi-volume-56daadae-2fcd-40b7-84ee-b0ebb5c237f2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:17:40.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4996" for this suite. 
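For context, the pod under test exposes its own container's memory request through a downwardAPI volume, and the test reads the projected file back from the container log. A minimal sketch, with assumed image, names, and paths (not taken from this run):

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-example    # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox                    # assumed; any image that can cat a file works
      command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
      resources:
        requests:
          memory: 32Mi
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: memory_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.memory

With the default divisor of 1, the projected file contains the request in plain bytes (33554432 for 32Mi).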
May 11 13:17:46.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:17:46.348: INFO: namespace downward-api-4996 deletion completed in 6.06749083s • [SLOW TEST:10.570 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:17:46.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 11 13:17:51.063: INFO: Successfully updated pod "labelsupdate1b1e8dcb-1f33-4f9f-86e1-ae15fd22707e" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:17:55.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4103" for this suite. 
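The "Successfully updated pod" line corresponds to the test patching the pod's labels; the kubelet then rewrites the projected file, which is why a downwardAPI volume (rather than an env var) is used for mutable metadata. A sketch of the relevant pieces, with assumed names:

  apiVersion: v1
  kind: Pod
  metadata:
    name: labelsupdate-example          # hypothetical
    labels:
      key: value1
  spec:
    containers:
    - name: client-container
      image: busybox                    # assumed
      command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: labels
          fieldRef:
            fieldPath: metadata.labels

After kubectl label pod labelsupdate-example key=value2 --overwrite, the kubelet eventually refreshes /etc/podinfo/labels with the new value.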
May 11 13:18:17.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:18:17.292: INFO: namespace downward-api-4103 deletion completed in 22.181540329s • [SLOW TEST:30.943 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:18:17.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-587a3e61-4129-49dd-8f48-24d40522e305 STEP: Creating a pod to test consume configMaps May 11 13:18:17.505: INFO: Waiting up to 5m0s for pod "pod-configmaps-3b4c3b27-61c4-459a-b41f-c47b2c6102b2" in namespace "configmap-5072" to be "success or failure" May 11 13:18:17.581: INFO: Pod "pod-configmaps-3b4c3b27-61c4-459a-b41f-c47b2c6102b2": Phase="Pending", Reason="", readiness=false. Elapsed: 76.384332ms May 11 13:18:19.584: INFO: Pod "pod-configmaps-3b4c3b27-61c4-459a-b41f-c47b2c6102b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079432702s May 11 13:18:21.588: INFO: Pod "pod-configmaps-3b4c3b27-61c4-459a-b41f-c47b2c6102b2": Phase="Running", Reason="", readiness=true. Elapsed: 4.08283886s May 11 13:18:23.591: INFO: Pod "pod-configmaps-3b4c3b27-61c4-459a-b41f-c47b2c6102b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.086310406s STEP: Saw pod success May 11 13:18:23.591: INFO: Pod "pod-configmaps-3b4c3b27-61c4-459a-b41f-c47b2c6102b2" satisfied condition "success or failure" May 11 13:18:23.594: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-3b4c3b27-61c4-459a-b41f-c47b2c6102b2 container configmap-volume-test: STEP: delete the pod May 11 13:18:23.640: INFO: Waiting for pod pod-configmaps-3b4c3b27-61c4-459a-b41f-c47b2c6102b2 to disappear May 11 13:18:23.743: INFO: Pod pod-configmaps-3b4c3b27-61c4-459a-b41f-c47b2c6102b2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:18:23.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5072" for this suite. 
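The defaultMode in the test name is the permission bits applied to every file projected from the ConfigMap. A sketch with assumed names and an assumed mode of 0400 (the configmap-volume-test container would then see a root-owned, read-only file):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: configmap-test-volume        # hypothetical
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-example       # hypothetical
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: busybox                   # assumed
      command: ["sh", "-c", "ls -l /etc/configmap-volume && cat /etc/configmap-volume/data-1"]
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/configmap-volume
    volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-volume
        defaultMode: 0400              # assumed; interpreted as octal file permissions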
May 11 13:18:29.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:18:29.926: INFO: namespace configmap-5072 deletion completed in 6.179940574s • [SLOW TEST:12.634 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:18:29.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:19:30.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3985" for this suite. 
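This test rests on the distinction between readiness and liveness: a failing readiness probe only keeps the pod out of the Ready condition; it never restarts the container, so the restart count stays 0 for the full minute the test observes. A minimal sketch, with assumed image and timings:

  apiVersion: v1
  kind: Pod
  metadata:
    name: test-webserver               # hypothetical
  spec:
    containers:
    - name: test-webserver
      image: busybox                   # assumed
      command: ["sh", "-c", "sleep 600"]
      readinessProbe:
        exec:
          command: ["/bin/false"]      # always fails, so Ready never becomes True
        initialDelaySeconds: 5
        periodSeconds: 5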
May 11 13:19:50.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:19:50.164: INFO: namespace container-probe-3985 deletion completed in 20.096339762s • [SLOW TEST:80.238 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:19:50.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 11 13:19:50.219: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:19:59.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6865" for this suite. 
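The "PodSpec: initContainers in spec.initContainers" line refers to a pod whose init containers must each run to completion, in order, before the regular container starts; with restartPolicy Always the pod then keeps running. A sketch with assumed names and images:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-init-example             # hypothetical
  spec:
    restartPolicy: Always
    initContainers:
    - name: init1
      image: busybox                   # assumed
      command: ["true"]                # exits 0 so the next init container can start
    - name: init2
      image: busybox
      command: ["true"]
    containers:
    - name: run1
      image: k8s.gcr.io/pause:3.1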
May 11 13:20:21.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:20:21.810: INFO: namespace init-container-6865 deletion completed in 22.143975918s • [SLOW TEST:31.646 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:20:21.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions May 11 13:20:21.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 11 13:20:22.140: INFO: stderr: "" May 11 13:20:22.140: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:20:22.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-394" for this suite. 
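The assertion itself reduces to checking that the core group's version string appears in the list printed above; an equivalent manual check (sketch) would be:

  kubectl api-versions | grep -x v1    # exits non-zero if the core v1 API is missing

Note that the core API group shows up as the bare "v1" line, while every other entry is group/version.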
May 11 13:20:28.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:20:28.242: INFO: namespace kubectl-394 deletion completed in 6.098333286s • [SLOW TEST:6.432 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:20:28.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-f23a0146-cbf1-48aa-90eb-dcf3f1d012a7 STEP: Creating a pod to test consume secrets May 11 13:20:28.338: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f82c0bdd-7d4f-4e86-ab64-c6bac3498ce0" in namespace "projected-5418" to be "success or failure" May 11 13:20:28.349: INFO: Pod "pod-projected-secrets-f82c0bdd-7d4f-4e86-ab64-c6bac3498ce0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.596609ms May 11 13:20:30.352: INFO: Pod "pod-projected-secrets-f82c0bdd-7d4f-4e86-ab64-c6bac3498ce0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013599778s May 11 13:20:32.355: INFO: Pod "pod-projected-secrets-f82c0bdd-7d4f-4e86-ab64-c6bac3498ce0": Phase="Running", Reason="", readiness=true. Elapsed: 4.016795651s May 11 13:20:34.358: INFO: Pod "pod-projected-secrets-f82c0bdd-7d4f-4e86-ab64-c6bac3498ce0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019892217s STEP: Saw pod success May 11 13:20:34.358: INFO: Pod "pod-projected-secrets-f82c0bdd-7d4f-4e86-ab64-c6bac3498ce0" satisfied condition "success or failure" May 11 13:20:34.360: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-f82c0bdd-7d4f-4e86-ab64-c6bac3498ce0 container projected-secret-volume-test: STEP: delete the pod May 11 13:20:34.379: INFO: Waiting for pod pod-projected-secrets-f82c0bdd-7d4f-4e86-ab64-c6bac3498ce0 to disappear May 11 13:20:34.383: INFO: Pod pod-projected-secrets-f82c0bdd-7d4f-4e86-ab64-c6bac3498ce0 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:20:34.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5418" for this suite. 
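Here the projected secret volume combines the three properties the test checks: the file mode from defaultMode, group ownership adjusted to the pod's fsGroup, and reading as a non-root UID from the pod security context. A sketch with assumed names, IDs, and mode:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-secrets-example   # hypothetical
  spec:
    securityContext:
      runAsUser: 1000                     # assumed non-root UID
      fsGroup: 1001                       # assumed; group ownership of projected files
    containers:
    - name: projected-secret-volume-test
      image: busybox                      # assumed
      command: ["sh", "-c", "ls -ln /etc/projected-secret && cat /etc/projected-secret/data-1"]
      volumeMounts:
      - name: projected-secret
        mountPath: /etc/projected-secret
        readOnly: true
    volumes:
    - name: projected-secret
      projected:
        defaultMode: 0440                 # assumed octal mode
        sources:
        - secret:
            name: projected-secret-test   # hypothetical secret name
            items:
            - key: data-1
              path: data-1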
May 11 13:20:40.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:20:40.551: INFO: namespace projected-5418 deletion completed in 6.164256151s • [SLOW TEST:12.308 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:20:40.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs May 11 13:20:41.299: INFO: Waiting up to 5m0s for pod "pod-1be51329-e15b-4656-b30e-5e9cb8a5e058" in namespace "emptydir-9822" to be "success or failure" May 11 13:20:41.445: INFO: Pod "pod-1be51329-e15b-4656-b30e-5e9cb8a5e058": Phase="Pending", Reason="", readiness=false. Elapsed: 145.840825ms May 11 13:20:43.517: INFO: Pod "pod-1be51329-e15b-4656-b30e-5e9cb8a5e058": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217851336s May 11 13:20:45.529: INFO: Pod "pod-1be51329-e15b-4656-b30e-5e9cb8a5e058": Phase="Pending", Reason="", readiness=false. Elapsed: 4.229479489s May 11 13:20:47.532: INFO: Pod "pod-1be51329-e15b-4656-b30e-5e9cb8a5e058": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.232561413s STEP: Saw pod success May 11 13:20:47.532: INFO: Pod "pod-1be51329-e15b-4656-b30e-5e9cb8a5e058" satisfied condition "success or failure" May 11 13:20:47.534: INFO: Trying to get logs from node iruya-worker2 pod pod-1be51329-e15b-4656-b30e-5e9cb8a5e058 container test-container: STEP: delete the pod May 11 13:20:47.554: INFO: Waiting for pod pod-1be51329-e15b-4656-b30e-5e9cb8a5e058 to disappear May 11 13:20:47.565: INFO: Pod pod-1be51329-e15b-4656-b30e-5e9cb8a5e058 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:20:47.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9822" for this suite. 
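The (root,0666,tmpfs) triple in the test name means: write as root, expect 0666 permissions on the created file, and back the volume with tmpfs. The tmpfs part comes from the emptyDir medium; a sketch with assumed image and commands:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-emptydir-example           # hypothetical
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox                     # assumed
      command: ["sh", "-c", "touch /test-volume/test-file && chmod 0666 /test-volume/test-file && ls -l /test-volume/test-file && grep ' /test-volume ' /proc/mounts"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir:
        medium: Memory                   # requests a tmpfs-backed volume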
May 11 13:20:53.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:20:53.663: INFO: namespace emptydir-9822 deletion completed in 6.09558187s • [SLOW TEST:13.112 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:20:53.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 11 13:21:05.782: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 13:21:05.816: INFO: Pod pod-with-poststart-exec-hook still exists May 11 13:21:07.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 13:21:07.820: INFO: Pod pod-with-poststart-exec-hook still exists May 11 13:21:09.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 13:21:09.820: INFO: Pod pod-with-poststart-exec-hook still exists May 11 13:21:11.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 13:21:11.820: INFO: Pod pod-with-poststart-exec-hook still exists May 11 13:21:13.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 13:21:13.820: INFO: Pod pod-with-poststart-exec-hook still exists May 11 13:21:15.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 13:21:15.822: INFO: Pod pod-with-poststart-exec-hook still exists May 11 13:21:17.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 13:21:17.820: INFO: Pod pod-with-poststart-exec-hook still exists May 11 13:21:19.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 13:21:19.819: INFO: Pod pod-with-poststart-exec-hook still exists May 11 13:21:21.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 13:21:21.821: INFO: Pod pod-with-poststart-exec-hook still exists May 11 13:21:23.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 13:21:23.820: INFO: Pod pod-with-poststart-exec-hook still exists May 11 13:21:25.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 13:21:25.820: INFO: Pod pod-with-poststart-exec-hook still exists May 11 13:21:27.816: INFO: Waiting for pod pod-with-poststart-exec-hook to 
disappear May 11 13:21:27.820: INFO: Pod pod-with-poststart-exec-hook still exists May 11 13:21:29.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 13:21:29.821: INFO: Pod pod-with-poststart-exec-hook still exists May 11 13:21:31.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 13:21:31.822: INFO: Pod pod-with-poststart-exec-hook still exists May 11 13:21:33.816: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 13:21:33.819: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:21:33.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3071" for this suite. May 11 13:21:57.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:21:58.062: INFO: namespace container-lifecycle-hook-3071 deletion completed in 24.2399597s • [SLOW TEST:64.399 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:21:58.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller May 11 13:21:58.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9976' May 11 13:22:02.447: INFO: stderr: "" May 11 13:22:02.447: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 11 13:22:02.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9976' May 11 13:22:02.574: INFO: stderr: "" May 11 13:22:02.574: INFO: stdout: "update-demo-nautilus-4dv6j update-demo-nautilus-gllf7 " May 11 13:22:02.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4dv6j -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9976' May 11 13:22:02.698: INFO: stderr: "" May 11 13:22:02.698: INFO: stdout: "" May 11 13:22:02.698: INFO: update-demo-nautilus-4dv6j is created but not running May 11 13:22:07.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9976' May 11 13:22:07.804: INFO: stderr: "" May 11 13:22:07.804: INFO: stdout: "update-demo-nautilus-4dv6j update-demo-nautilus-gllf7 " May 11 13:22:07.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4dv6j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9976' May 11 13:22:07.896: INFO: stderr: "" May 11 13:22:07.896: INFO: stdout: "true" May 11 13:22:07.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4dv6j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9976' May 11 13:22:07.985: INFO: stderr: "" May 11 13:22:07.985: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 13:22:07.985: INFO: validating pod update-demo-nautilus-4dv6j May 11 13:22:07.988: INFO: got data: { "image": "nautilus.jpg" } May 11 13:22:07.988: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 13:22:07.988: INFO: update-demo-nautilus-4dv6j is verified up and running May 11 13:22:07.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gllf7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9976' May 11 13:22:08.074: INFO: stderr: "" May 11 13:22:08.074: INFO: stdout: "true" May 11 13:22:08.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gllf7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9976' May 11 13:22:08.161: INFO: stderr: "" May 11 13:22:08.161: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 13:22:08.161: INFO: validating pod update-demo-nautilus-gllf7 May 11 13:22:08.165: INFO: got data: { "image": "nautilus.jpg" } May 11 13:22:08.165: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 13:22:08.165: INFO: update-demo-nautilus-gllf7 is verified up and running STEP: scaling down the replication controller May 11 13:22:08.167: INFO: scanned /root for discovery docs: May 11 13:22:08.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9976' May 11 13:22:09.318: INFO: stderr: "" May 11 13:22:09.318: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 11 13:22:09.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9976' May 11 13:22:09.411: INFO: stderr: "" May 11 13:22:09.411: INFO: stdout: "update-demo-nautilus-4dv6j update-demo-nautilus-gllf7 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 11 13:22:14.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9976' May 11 13:22:14.507: INFO: stderr: "" May 11 13:22:14.507: INFO: stdout: "update-demo-nautilus-4dv6j update-demo-nautilus-gllf7 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 11 13:22:19.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9976' May 11 13:22:19.606: INFO: stderr: "" May 11 13:22:19.606: INFO: stdout: "update-demo-nautilus-4dv6j update-demo-nautilus-gllf7 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 11 13:22:24.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9976' May 11 13:22:24.775: INFO: stderr: "" May 11 13:22:24.775: INFO: stdout: "update-demo-nautilus-4dv6j " May 11 13:22:24.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4dv6j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9976' May 11 13:22:24.871: INFO: stderr: "" May 11 13:22:24.871: INFO: stdout: "true" May 11 13:22:24.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4dv6j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9976' May 11 13:22:24.956: INFO: stderr: "" May 11 13:22:24.956: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 13:22:24.956: INFO: validating pod update-demo-nautilus-4dv6j May 11 13:22:24.959: INFO: got data: { "image": "nautilus.jpg" } May 11 13:22:24.959: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 13:22:24.959: INFO: update-demo-nautilus-4dv6j is verified up and running STEP: scaling up the replication controller May 11 13:22:24.961: INFO: scanned /root for discovery docs: May 11 13:22:24.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9976' May 11 13:22:26.156: INFO: stderr: "" May 11 13:22:26.156: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 11 13:22:26.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9976' May 11 13:22:26.263: INFO: stderr: "" May 11 13:22:26.263: INFO: stdout: "update-demo-nautilus-4dv6j update-demo-nautilus-mfzwr " May 11 13:22:26.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4dv6j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9976' May 11 13:22:26.366: INFO: stderr: "" May 11 13:22:26.366: INFO: stdout: "true" May 11 13:22:26.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4dv6j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9976' May 11 13:22:26.458: INFO: stderr: "" May 11 13:22:26.458: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 13:22:26.458: INFO: validating pod update-demo-nautilus-4dv6j May 11 13:22:26.460: INFO: got data: { "image": "nautilus.jpg" } May 11 13:22:26.460: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 13:22:26.460: INFO: update-demo-nautilus-4dv6j is verified up and running May 11 13:22:26.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mfzwr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9976' May 11 13:22:26.555: INFO: stderr: "" May 11 13:22:26.555: INFO: stdout: "" May 11 13:22:26.555: INFO: update-demo-nautilus-mfzwr is created but not running May 11 13:22:31.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9976' May 11 13:22:31.661: INFO: stderr: "" May 11 13:22:31.661: INFO: stdout: "update-demo-nautilus-4dv6j update-demo-nautilus-mfzwr " May 11 13:22:31.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4dv6j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9976' May 11 13:22:31.750: INFO: stderr: "" May 11 13:22:31.750: INFO: stdout: "true" May 11 13:22:31.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4dv6j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9976' May 11 13:22:31.872: INFO: stderr: "" May 11 13:22:31.872: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 13:22:31.872: INFO: validating pod update-demo-nautilus-4dv6j May 11 13:22:31.876: INFO: got data: { "image": "nautilus.jpg" } May 11 13:22:31.876: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 11 13:22:31.876: INFO: update-demo-nautilus-4dv6j is verified up and running May 11 13:22:31.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mfzwr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9976' May 11 13:22:31.971: INFO: stderr: "" May 11 13:22:31.971: INFO: stdout: "true" May 11 13:22:31.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mfzwr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9976' May 11 13:22:32.061: INFO: stderr: "" May 11 13:22:32.061: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 13:22:32.061: INFO: validating pod update-demo-nautilus-mfzwr May 11 13:22:32.065: INFO: got data: { "image": "nautilus.jpg" } May 11 13:22:32.065: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 13:22:32.065: INFO: update-demo-nautilus-mfzwr is verified up and running STEP: using delete to clean up resources May 11 13:22:32.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9976' May 11 13:22:32.211: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 13:22:32.211: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 11 13:22:32.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9976' May 11 13:22:32.307: INFO: stderr: "No resources found.\n" May 11 13:22:32.307: INFO: stdout: "" May 11 13:22:32.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9976 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 11 13:22:32.455: INFO: stderr: "" May 11 13:22:32.455: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:22:32.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9976" for this suite. 
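Stripped of the test-harness flags, the scale-down/scale-up cycle above is just the following sequence (namespace, rc name, and the Go template for listing pod names are taken from this run):

  kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9976
  kubectl get pods -l name=update-demo --namespace=kubectl-9976 \
    -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
  kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9976

The polling loop in the log simply re-runs the get until the printed name list matches the expected replica count.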
May 11 13:22:54.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:22:54.529: INFO: namespace kubectl-9976 deletion completed in 22.07117994s • [SLOW TEST:56.466 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:22:54.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 13:22:55.441: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 11 13:22:55.484: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:22:55.620: INFO: Number of nodes with available pods: 0 May 11 13:22:55.620: INFO: Node iruya-worker is running more than one daemon pod May 11 13:22:56.624: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:22:56.627: INFO: Number of nodes with available pods: 0 May 11 13:22:56.627: INFO: Node iruya-worker is running more than one daemon pod May 11 13:22:57.813: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:22:57.816: INFO: Number of nodes with available pods: 0 May 11 13:22:57.816: INFO: Node iruya-worker is running more than one daemon pod May 11 13:22:58.626: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:22:58.629: INFO: Number of nodes with available pods: 0 May 11 13:22:58.629: INFO: Node iruya-worker is running more than one daemon pod May 11 13:22:59.664: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:22:59.667: INFO: Number of nodes with available pods: 0 May 11 13:22:59.667: INFO: Node iruya-worker is running more than one daemon pod May 11 13:23:00.640: INFO: DaemonSet pods can't tolerate node iruya-control-plane with 
taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:23:00.643: INFO: Number of nodes with available pods: 1 May 11 13:23:00.643: INFO: Node iruya-worker is running more than one daemon pod May 11 13:23:01.626: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:23:01.630: INFO: Number of nodes with available pods: 2 May 11 13:23:01.630: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 11 13:23:01.708: INFO: Wrong image for pod: daemon-set-6cm4f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:01.708: INFO: Wrong image for pod: daemon-set-qsqgs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:01.748: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:23:02.754: INFO: Wrong image for pod: daemon-set-6cm4f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:02.754: INFO: Wrong image for pod: daemon-set-qsqgs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:02.758: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:23:03.754: INFO: Wrong image for pod: daemon-set-6cm4f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:03.754: INFO: Wrong image for pod: daemon-set-qsqgs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:03.758: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:23:04.751: INFO: Wrong image for pod: daemon-set-6cm4f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:04.751: INFO: Wrong image for pod: daemon-set-qsqgs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:04.751: INFO: Pod daemon-set-qsqgs is not available May 11 13:23:04.755: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:23:05.751: INFO: Wrong image for pod: daemon-set-6cm4f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:05.751: INFO: Wrong image for pod: daemon-set-qsqgs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:05.751: INFO: Pod daemon-set-qsqgs is not available May 11 13:23:05.755: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:23:06.752: INFO: Wrong image for pod: daemon-set-6cm4f. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:06.752: INFO: Wrong image for pod: daemon-set-qsqgs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:06.752: INFO: Pod daemon-set-qsqgs is not available May 11 13:23:06.755: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:23:07.751: INFO: Wrong image for pod: daemon-set-6cm4f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:07.751: INFO: Wrong image for pod: daemon-set-qsqgs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:07.751: INFO: Pod daemon-set-qsqgs is not available May 11 13:23:07.754: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:23:08.753: INFO: Wrong image for pod: daemon-set-6cm4f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:08.753: INFO: Wrong image for pod: daemon-set-qsqgs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:08.753: INFO: Pod daemon-set-qsqgs is not available May 11 13:23:08.756: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:23:09.752: INFO: Wrong image for pod: daemon-set-6cm4f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:09.752: INFO: Wrong image for pod: daemon-set-qsqgs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:09.752: INFO: Pod daemon-set-qsqgs is not available May 11 13:23:09.756: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:23:10.752: INFO: Wrong image for pod: daemon-set-6cm4f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:10.752: INFO: Wrong image for pod: daemon-set-qsqgs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:10.752: INFO: Pod daemon-set-qsqgs is not available May 11 13:23:10.754: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:23:11.752: INFO: Wrong image for pod: daemon-set-6cm4f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:11.752: INFO: Wrong image for pod: daemon-set-qsqgs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:11.752: INFO: Pod daemon-set-qsqgs is not available May 11 13:23:11.756: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:23:12.753: INFO: Wrong image for pod: daemon-set-6cm4f. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:12.753: INFO: Pod daemon-set-6k74g is not available May 11 13:23:12.756: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:23:13.947: INFO: Wrong image for pod: daemon-set-6cm4f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:13.947: INFO: Pod daemon-set-6k74g is not available May 11 13:23:13.952: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:23:14.753: INFO: Wrong image for pod: daemon-set-6cm4f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:14.753: INFO: Pod daemon-set-6k74g is not available May 11 13:23:14.756: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:23:15.801: INFO: Wrong image for pod: daemon-set-6cm4f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:15.904: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:23:16.751: INFO: Wrong image for pod: daemon-set-6cm4f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:16.755: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:23:17.770: INFO: Wrong image for pod: daemon-set-6cm4f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 11 13:23:17.770: INFO: Pod daemon-set-6cm4f is not available May 11 13:23:17.773: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:23:18.751: INFO: Pod daemon-set-j6h8r is not available May 11 13:23:18.754: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
May 11 13:23:18.756: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:23:18.758: INFO: Number of nodes with available pods: 1 May 11 13:23:18.758: INFO: Node iruya-worker is running more than one daemon pod May 11 13:23:19.891: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:23:19.894: INFO: Number of nodes with available pods: 1 May 11 13:23:19.894: INFO: Node iruya-worker is running more than one daemon pod May 11 13:23:20.814: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:23:20.816: INFO: Number of nodes with available pods: 1 May 11 13:23:20.816: INFO: Node iruya-worker is running more than one daemon pod May 11 13:23:21.790: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:23:21.862: INFO: Number of nodes with available pods: 1 May 11 13:23:21.862: INFO: Node iruya-worker is running more than one daemon pod May 11 13:23:22.761: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:23:22.763: INFO: Number of nodes with available pods: 1 May 11 13:23:22.763: INFO: Node iruya-worker is running more than one daemon pod May 11 13:23:23.762: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:23:23.764: INFO: Number of nodes with available pods: 2 May 11 13:23:23.764: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6116, will wait for the garbage collector to delete the pods May 11 13:23:23.828: INFO: Deleting DaemonSet.extensions daemon-set took: 5.048388ms May 11 13:23:24.328: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.313201ms May 11 13:23:32.131: INFO: Number of nodes with available pods: 0 May 11 13:23:32.131: INFO: Number of running nodes: 0, number of available pods: 0 May 11 13:23:32.134: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6116/daemonsets","resourceVersion":"10249916"},"items":null} May 11 13:23:32.136: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6116/pods","resourceVersion":"10249916"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:23:32.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6116" for this suite. 
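The polling above is the update test waiting for a RollingUpdate DaemonSet to replace its nginx:1.14-alpine pods with the redis:1.0 image node by node, skipping the tainted control-plane node. The same rollout can be reproduced by hand; the sketch below is illustrative (names, labels, and namespace are not the test's own):
# Create a DaemonSet whose update strategy is RollingUpdate (the apps/v1 default).
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set            # illustrative name
spec:
  selector:
    matchLabels:
      app: daemon
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# Trigger the rollout the test polls for, then watch it converge.
kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status daemonset/daemon-set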
May 11 13:23:38.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:23:38.372: INFO: namespace daemonsets-6116 deletion completed in 6.226079617s • [SLOW TEST:43.843 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:23:38.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 11 13:23:38.481: INFO: Waiting up to 5m0s for pod "downward-api-bc939e72-619d-4348-9b16-fbae73436eb5" in namespace "downward-api-8015" to be "success or failure" May 11 13:23:38.484: INFO: Pod "downward-api-bc939e72-619d-4348-9b16-fbae73436eb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.840104ms May 11 13:23:40.488: INFO: Pod "downward-api-bc939e72-619d-4348-9b16-fbae73436eb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007164674s May 11 13:23:42.492: INFO: Pod "downward-api-bc939e72-619d-4348-9b16-fbae73436eb5": Phase="Running", Reason="", readiness=true. Elapsed: 4.010380462s May 11 13:23:44.494: INFO: Pod "downward-api-bc939e72-619d-4348-9b16-fbae73436eb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013240536s STEP: Saw pod success May 11 13:23:44.494: INFO: Pod "downward-api-bc939e72-619d-4348-9b16-fbae73436eb5" satisfied condition "success or failure" May 11 13:23:44.496: INFO: Trying to get logs from node iruya-worker2 pod downward-api-bc939e72-619d-4348-9b16-fbae73436eb5 container dapi-container: STEP: delete the pod May 11 13:23:44.509: INFO: Waiting for pod downward-api-bc939e72-619d-4348-9b16-fbae73436eb5 to disappear May 11 13:23:44.561: INFO: Pod downward-api-bc939e72-619d-4348-9b16-fbae73436eb5 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:23:44.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8015" for this suite. 
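The pod under test exposes the node's address to its container through the downward API. A minimal equivalent, assuming an illustrative pod name and a stock busybox image:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo     # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env | grep HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF
# Once the pod has completed, its log should contain HOST_IP=<node address>.
kubectl logs downward-api-demo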
May 11 13:23:50.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:23:50.689: INFO: namespace downward-api-8015 deletion completed in 6.125112359s • [SLOW TEST:12.317 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:23:50.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-1980/configmap-test-d2c44558-504f-4151-a994-7ccbb3110983 STEP: Creating a pod to test consume configMaps May 11 13:23:50.786: INFO: Waiting up to 5m0s for pod "pod-configmaps-10a45095-74a3-44be-a399-e289fdd81b40" in namespace "configmap-1980" to be "success or failure" May 11 13:23:50.800: INFO: Pod "pod-configmaps-10a45095-74a3-44be-a399-e289fdd81b40": Phase="Pending", Reason="", readiness=false. Elapsed: 13.744281ms May 11 13:23:53.101: INFO: Pod "pod-configmaps-10a45095-74a3-44be-a399-e289fdd81b40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314357433s May 11 13:23:55.104: INFO: Pod "pod-configmaps-10a45095-74a3-44be-a399-e289fdd81b40": Phase="Running", Reason="", readiness=true. Elapsed: 4.317931946s May 11 13:23:57.108: INFO: Pod "pod-configmaps-10a45095-74a3-44be-a399-e289fdd81b40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.321757054s STEP: Saw pod success May 11 13:23:57.108: INFO: Pod "pod-configmaps-10a45095-74a3-44be-a399-e289fdd81b40" satisfied condition "success or failure" May 11 13:23:57.110: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-10a45095-74a3-44be-a399-e289fdd81b40 container env-test: STEP: delete the pod May 11 13:23:57.132: INFO: Waiting for pod pod-configmaps-10a45095-74a3-44be-a399-e289fdd81b40 to disappear May 11 13:23:57.137: INFO: Pod pod-configmaps-10a45095-74a3-44be-a399-e289fdd81b40 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:23:57.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1980" for this suite. 
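Here the env-test container reads a ConfigMap key as an environment variable via configMapKeyRef. A compact reproduction under illustrative names:
kubectl create configmap demo-config --from-literal=DATA_1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: demo-config
          key: DATA_1
EOF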
May 11 13:24:03.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:24:03.230: INFO: namespace configmap-1980 deletion completed in 6.090909082s • [SLOW TEST:12.541 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:24:03.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-e0613340-0bfd-4797-bb22-89094dc85a83 STEP: Creating a pod to test consume configMaps May 11 13:24:03.295: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-10d51743-7324-4232-b10c-ebe7b9debfe9" in namespace "projected-3260" to be "success or failure" May 11 13:24:03.299: INFO: Pod "pod-projected-configmaps-10d51743-7324-4232-b10c-ebe7b9debfe9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023103ms May 11 13:24:05.341: INFO: Pod "pod-projected-configmaps-10d51743-7324-4232-b10c-ebe7b9debfe9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045553366s May 11 13:24:07.436: INFO: Pod "pod-projected-configmaps-10d51743-7324-4232-b10c-ebe7b9debfe9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140656491s May 11 13:24:09.439: INFO: Pod "pod-projected-configmaps-10d51743-7324-4232-b10c-ebe7b9debfe9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.143407983s STEP: Saw pod success May 11 13:24:09.439: INFO: Pod "pod-projected-configmaps-10d51743-7324-4232-b10c-ebe7b9debfe9" satisfied condition "success or failure" May 11 13:24:09.441: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-10d51743-7324-4232-b10c-ebe7b9debfe9 container projected-configmap-volume-test: STEP: delete the pod May 11 13:24:09.481: INFO: Waiting for pod pod-projected-configmaps-10d51743-7324-4232-b10c-ebe7b9debfe9 to disappear May 11 13:24:09.491: INFO: Pod pod-projected-configmaps-10d51743-7324-4232-b10c-ebe7b9debfe9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:24:09.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3260" for this suite. 
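The "mappings and Item mode" wording refers to a projected volume whose items remap a ConfigMap key to a new path and set a per-file mode. A sketch reusing the illustrative demo-config ConfigMap from above:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo     # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected/path/to/data"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: demo-config
          items:
          - key: DATA_1
            path: path/to/data   # the mapping
            mode: 0400           # the per-item file mode
EOF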
May 11 13:24:15.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:24:15.593: INFO: namespace projected-3260 deletion completed in 6.096766766s • [SLOW TEST:12.363 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:24:15.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-0e3490ef-79c2-4f76-ba4e-00b9079d30d4 STEP: Creating a pod to test consume configMaps May 11 13:24:15.691: INFO: Waiting up to 5m0s for pod "pod-configmaps-95e45683-0a3b-4c08-a4f0-9bb0d0ab3ad1" in namespace "configmap-7839" to be "success or failure" May 11 13:24:15.694: INFO: Pod "pod-configmaps-95e45683-0a3b-4c08-a4f0-9bb0d0ab3ad1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.054377ms May 11 13:24:17.700: INFO: Pod "pod-configmaps-95e45683-0a3b-4c08-a4f0-9bb0d0ab3ad1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008796483s May 11 13:24:19.703: INFO: Pod "pod-configmaps-95e45683-0a3b-4c08-a4f0-9bb0d0ab3ad1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012190663s STEP: Saw pod success May 11 13:24:19.703: INFO: Pod "pod-configmaps-95e45683-0a3b-4c08-a4f0-9bb0d0ab3ad1" satisfied condition "success or failure" May 11 13:24:19.705: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-95e45683-0a3b-4c08-a4f0-9bb0d0ab3ad1 container configmap-volume-test: STEP: delete the pod May 11 13:24:19.756: INFO: Waiting for pod pod-configmaps-95e45683-0a3b-4c08-a4f0-9bb0d0ab3ad1 to disappear May 11 13:24:19.807: INFO: Pod pod-configmaps-95e45683-0a3b-4c08-a4f0-9bb0d0ab3ad1 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:24:19.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7839" for this suite. 
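This test is the non-projected twin of the previous one: the volume names the ConfigMap directly instead of wrapping it in projected sources. Only the volume stanza changes; an illustrative sketch:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/cm/path/to/data"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:                   # direct volume source, no projected wrapper
      name: demo-config
      items:
      - key: DATA_1
        path: path/to/data
        mode: 0400
EOF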
May 11 13:24:25.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:24:25.891: INFO: namespace configmap-7839 deletion completed in 6.080668038s • [SLOW TEST:10.297 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:24:25.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 13:24:25.959: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b380defb-47ef-4c30-9b6e-6c93dcad6371" in namespace "downward-api-6991" to be "success or failure" May 11 13:24:26.018: INFO: Pod "downwardapi-volume-b380defb-47ef-4c30-9b6e-6c93dcad6371": Phase="Pending", Reason="", readiness=false. Elapsed: 58.397758ms May 11 13:24:28.021: INFO: Pod "downwardapi-volume-b380defb-47ef-4c30-9b6e-6c93dcad6371": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061886334s May 11 13:24:30.025: INFO: Pod "downwardapi-volume-b380defb-47ef-4c30-9b6e-6c93dcad6371": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065280065s STEP: Saw pod success May 11 13:24:30.025: INFO: Pod "downwardapi-volume-b380defb-47ef-4c30-9b6e-6c93dcad6371" satisfied condition "success or failure" May 11 13:24:30.027: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-b380defb-47ef-4c30-9b6e-6c93dcad6371 container client-container: STEP: delete the pod May 11 13:24:30.172: INFO: Waiting for pod downwardapi-volume-b380defb-47ef-4c30-9b6e-6c93dcad6371 to disappear May 11 13:24:30.204: INFO: Pod downwardapi-volume-b380defb-47ef-4c30-9b6e-6c93dcad6371 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:24:30.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6991" for this suite. 
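The assertion here is the fallback rule: when a container declares no memory limit, a downward-API resourceFieldRef for limits.memory reports the node's allocatable memory instead. A minimal sketch with illustrative names:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container       # no resources.limits, on purpose
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
# With no limit set, the file holds the node-allocatable value the test checks.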
May 11 13:24:36.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:24:36.346: INFO: namespace downward-api-6991 deletion completed in 6.138957943s • [SLOW TEST:10.454 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:24:36.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-17382c18-b5a5-4255-99f7-18a89704bb9b STEP: Creating a pod to test consume configMaps May 11 13:24:36.448: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-72597715-a370-4885-8a28-c27e609482cf" in namespace "projected-5440" to be "success or failure" May 11 13:24:36.450: INFO: Pod "pod-projected-configmaps-72597715-a370-4885-8a28-c27e609482cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142856ms May 11 13:24:38.455: INFO: Pod "pod-projected-configmaps-72597715-a370-4885-8a28-c27e609482cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006462815s May 11 13:24:40.459: INFO: Pod "pod-projected-configmaps-72597715-a370-4885-8a28-c27e609482cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010687945s May 11 13:24:42.463: INFO: Pod "pod-projected-configmaps-72597715-a370-4885-8a28-c27e609482cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014832217s STEP: Saw pod success May 11 13:24:42.463: INFO: Pod "pod-projected-configmaps-72597715-a370-4885-8a28-c27e609482cf" satisfied condition "success or failure" May 11 13:24:42.466: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-72597715-a370-4885-8a28-c27e609482cf container projected-configmap-volume-test: STEP: delete the pod May 11 13:24:42.577: INFO: Waiting for pod pod-projected-configmaps-72597715-a370-4885-8a28-c27e609482cf to disappear May 11 13:24:42.623: INFO: Pod pod-projected-configmaps-72597715-a370-4885-8a28-c27e609482cf no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:24:42.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5440" for this suite. 
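Unlike the per-item mode seen earlier, defaultMode applies one file mode to every key projected into the volume. A sketch under the same illustrative names:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected/DATA_1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      defaultMode: 0400          # mode for every file; a per-item mode would override it
      sources:
      - configMap:
          name: demo-config
EOF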
May 11 13:24:50.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:24:50.758: INFO: namespace projected-5440 deletion completed in 8.129893713s • [SLOW TEST:14.411 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:24:50.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 13:24:50.832: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0b7ea44a-e45c-437b-a0d8-6e6f963ce826" in namespace "projected-5906" to be "success or failure" May 11 13:24:50.846: INFO: Pod "downwardapi-volume-0b7ea44a-e45c-437b-a0d8-6e6f963ce826": Phase="Pending", Reason="", readiness=false. Elapsed: 14.118677ms May 11 13:24:52.850: INFO: Pod "downwardapi-volume-0b7ea44a-e45c-437b-a0d8-6e6f963ce826": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017905647s May 11 13:24:54.854: INFO: Pod "downwardapi-volume-0b7ea44a-e45c-437b-a0d8-6e6f963ce826": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02188284s STEP: Saw pod success May 11 13:24:54.854: INFO: Pod "downwardapi-volume-0b7ea44a-e45c-437b-a0d8-6e6f963ce826" satisfied condition "success or failure" May 11 13:24:54.856: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-0b7ea44a-e45c-437b-a0d8-6e6f963ce826 container client-container: STEP: delete the pod May 11 13:24:54.907: INFO: Waiting for pod downwardapi-volume-0b7ea44a-e45c-437b-a0d8-6e6f963ce826 to disappear May 11 13:24:55.018: INFO: Pod downwardapi-volume-0b7ea44a-e45c-437b-a0d8-6e6f963ce826 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:24:55.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5906" for this suite. 
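The projected downwardAPI source works like the plain downwardAPI volume but composes with other sources. Here the test reads the container's own memory request back from a file; a sketch with illustrative names:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-dapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
EOF
# Prints 33554432, i.e. 32Mi in bytes (the default divisor is 1).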
May 11 13:25:01.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:25:01.172: INFO: namespace projected-5906 deletion completed in 6.150653594s • [SLOW TEST:10.414 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:25:01.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 13:25:01.334: INFO: Create a RollingUpdate DaemonSet May 11 13:25:01.337: INFO: Check that daemon pods launch on every node of the cluster May 11 13:25:01.390: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:25:01.396: INFO: Number of nodes with available pods: 0 May 11 13:25:01.396: INFO: Node iruya-worker is running more than one daemon pod May 11 13:25:02.400: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:25:02.403: INFO: Number of nodes with available pods: 0 May 11 13:25:02.403: INFO: Node iruya-worker is running more than one daemon pod May 11 13:25:03.399: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:25:03.401: INFO: Number of nodes with available pods: 0 May 11 13:25:03.401: INFO: Node iruya-worker is running more than one daemon pod May 11 13:25:04.404: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:25:04.451: INFO: Number of nodes with available pods: 0 May 11 13:25:04.451: INFO: Node iruya-worker is running more than one daemon pod May 11 13:25:05.522: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:25:05.570: INFO: Number of nodes with available pods: 0 May 11 13:25:05.571: INFO: Node iruya-worker is running more than one daemon pod May 11 13:25:06.399: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node May 11 13:25:06.402: INFO: Number of nodes with available pods: 1 May 11 13:25:06.402: INFO: Node iruya-worker is running more than one daemon pod May 11 13:25:07.400: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:25:07.403: INFO: Number of nodes with available pods: 2 May 11 13:25:07.403: INFO: Number of running nodes: 2, number of available pods: 2 May 11 13:25:07.403: INFO: Update the DaemonSet to trigger a rollout May 11 13:25:07.409: INFO: Updating DaemonSet daemon-set May 11 13:25:22.671: INFO: Roll back the DaemonSet before rollout is complete May 11 13:25:22.676: INFO: Updating DaemonSet daemon-set May 11 13:25:22.676: INFO: Make sure DaemonSet rollback is complete May 11 13:25:22.706: INFO: Wrong image for pod: daemon-set-t6jxp. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 11 13:25:22.707: INFO: Pod daemon-set-t6jxp is not available May 11 13:25:22.815: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:25:23.819: INFO: Wrong image for pod: daemon-set-t6jxp. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 11 13:25:23.819: INFO: Pod daemon-set-t6jxp is not available May 11 13:25:23.822: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:25:24.859: INFO: Wrong image for pod: daemon-set-t6jxp. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 11 13:25:24.859: INFO: Pod daemon-set-t6jxp is not available May 11 13:25:24.863: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:25:25.977: INFO: Wrong image for pod: daemon-set-t6jxp. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 11 13:25:25.977: INFO: Pod daemon-set-t6jxp is not available May 11 13:25:25.981: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:25:26.935: INFO: Wrong image for pod: daemon-set-t6jxp. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
May 11 13:25:26.935: INFO: Pod daemon-set-t6jxp is not available May 11 13:25:26.949: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 13:25:27.820: INFO: Pod daemon-set-cbwcv is not available May 11 13:25:27.824: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-564, will wait for the garbage collector to delete the pods May 11 13:25:27.889: INFO: Deleting DaemonSet.extensions daemon-set took: 7.084984ms May 11 13:25:29.290: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.400224999s May 11 13:25:32.900: INFO: Number of nodes with available pods: 0 May 11 13:25:32.900: INFO: Number of running nodes: 0, number of available pods: 0 May 11 13:25:32.903: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-564/daemonsets","resourceVersion":"10250421"},"items":null} May 11 13:25:32.905: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-564/pods","resourceVersion":"10250421"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:25:32.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-564" for this suite. 
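The rollback test wedges the rollout on an unpullable image (foo:non-existent) and then undoes it before completion, asserting that pods already on the good image are not restarted. By hand the sequence would look roughly like this (DaemonSet name as in the log; the container name is illustrative):
kubectl set image daemonset/daemon-set app=foo:non-existent
kubectl rollout status daemonset/daemon-set --timeout=30s || true   # stalls; the image cannot be pulled
kubectl rollout undo daemonset/daemon-set                           # roll back mid-rollout
kubectl rollout status daemonset/daemon-set                         # converges on the previous image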
May 11 13:25:40.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:25:41.040: INFO: namespace daemonsets-564 deletion completed in 8.127890764s • [SLOW TEST:39.867 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:25:41.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 11 13:25:41.348: INFO: Waiting up to 5m0s for pod "downward-api-3d3bcfe2-f601-4d8a-b662-eda65bd46644" in namespace "downward-api-5343" to be "success or failure" May 11 13:25:41.351: INFO: Pod "downward-api-3d3bcfe2-f601-4d8a-b662-eda65bd46644": Phase="Pending", Reason="", readiness=false. Elapsed: 2.708019ms May 11 13:25:43.356: INFO: Pod "downward-api-3d3bcfe2-f601-4d8a-b662-eda65bd46644": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007272191s May 11 13:25:45.839: INFO: Pod "downward-api-3d3bcfe2-f601-4d8a-b662-eda65bd46644": Phase="Pending", Reason="", readiness=false. Elapsed: 4.490975776s May 11 13:25:48.540: INFO: Pod "downward-api-3d3bcfe2-f601-4d8a-b662-eda65bd46644": Phase="Pending", Reason="", readiness=false. Elapsed: 7.191321048s May 11 13:25:50.544: INFO: Pod "downward-api-3d3bcfe2-f601-4d8a-b662-eda65bd46644": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.195408154s STEP: Saw pod success May 11 13:25:50.544: INFO: Pod "downward-api-3d3bcfe2-f601-4d8a-b662-eda65bd46644" satisfied condition "success or failure" May 11 13:25:50.547: INFO: Trying to get logs from node iruya-worker pod downward-api-3d3bcfe2-f601-4d8a-b662-eda65bd46644 container dapi-container: STEP: delete the pod May 11 13:25:50.839: INFO: Waiting for pod downward-api-3d3bcfe2-f601-4d8a-b662-eda65bd46644 to disappear May 11 13:25:50.842: INFO: Pod downward-api-3d3bcfe2-f601-4d8a-b662-eda65bd46644 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:25:50.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5343" for this suite. 
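This variant maps all four resource fields into environment variables via resourceFieldRef. An illustrative sketch:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-resources-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
    - name: CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
EOF
# CPU values are rounded up to whole cores unless a divisor is set, so
# CPU_LIMIT prints 1 here; memory values print in bytes.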
May 11 13:25:56.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:25:56.967: INFO: namespace downward-api-5343 deletion completed in 6.120486775s • [SLOW TEST:15.926 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:25:56.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium May 11 13:25:57.189: INFO: Waiting up to 5m0s for pod "pod-a498e676-870a-4dc1-991f-712aba59b42f" in namespace "emptydir-9305" to be "success or failure" May 11 13:25:57.194: INFO: Pod "pod-a498e676-870a-4dc1-991f-712aba59b42f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.995616ms May 11 13:25:59.243: INFO: Pod "pod-a498e676-870a-4dc1-991f-712aba59b42f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054227091s May 11 13:26:01.247: INFO: Pod "pod-a498e676-870a-4dc1-991f-712aba59b42f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057622278s May 11 13:26:03.250: INFO: Pod "pod-a498e676-870a-4dc1-991f-712aba59b42f": Phase="Running", Reason="", readiness=true. Elapsed: 6.061070626s May 11 13:26:05.254: INFO: Pod "pod-a498e676-870a-4dc1-991f-712aba59b42f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065045521s STEP: Saw pod success May 11 13:26:05.254: INFO: Pod "pod-a498e676-870a-4dc1-991f-712aba59b42f" satisfied condition "success or failure" May 11 13:26:05.256: INFO: Trying to get logs from node iruya-worker2 pod pod-a498e676-870a-4dc1-991f-712aba59b42f container test-container: STEP: delete the pod May 11 13:26:05.408: INFO: Waiting for pod pod-a498e676-870a-4dc1-991f-712aba59b42f to disappear May 11 13:26:05.412: INFO: Pod pod-a498e676-870a-4dc1-991f-712aba59b42f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:26:05.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9305" for this suite. 
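The (root,0644,default) tuple in the test name means: write as root, file mode 0644, default (disk-backed) medium. A hand-rolled equivalent, names illustrative:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && cat /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # default medium = node disk; medium: Memory would use tmpfs
EOF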
May 11 13:26:11.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:26:11.634: INFO: namespace emptydir-9305 deletion completed in 6.217613047s • [SLOW TEST:14.666 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:26:11.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 11 13:26:17.934: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-ce8fe890-a1e3-4af8-972a-16c5568a103d,GenerateName:,Namespace:events-7518,SelfLink:/api/v1/namespaces/events-7518/pods/send-events-ce8fe890-a1e3-4af8-972a-16c5568a103d,UID:4a45b9da-4dc5-4ae4-8b12-b159b028b62e,ResourceVersion:10250590,Generation:0,CreationTimestamp:2020-05-11 13:26:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 783642066,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-m6mws {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m6mws,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-m6mws true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001abc7f0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001abc810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:26:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:26:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:26:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 13:26:11 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.174,StartTime:2020-05-11 13:26:11 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-11 13:26:16 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://67c6b4cf67b8c05ffe50b658ea60c636c33178ff815eebee1518c73eb4aa9815}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 11 13:26:19.939: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 11 13:26:21.943: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:26:21.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7518" for this suite. May 11 13:27:02.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:27:02.132: INFO: namespace events-7518 deletion completed in 40.176505893s • [SLOW TEST:50.498 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:27:02.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 13:27:02.323: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 6.885446ms)
May 11 13:27:02.325: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.878883ms)
May 11 13:27:02.328: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.231359ms)
May 11 13:27:02.330: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 1.902152ms)
May 11 13:27:02.332: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.131328ms)
May 11 13:27:02.334: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 1.948911ms)
May 11 13:27:02.336: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.066424ms)
May 11 13:27:02.338: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.057236ms)
May 11 13:27:02.340: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 1.7683ms)
May 11 13:27:02.342: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.112715ms)
May 11 13:27:02.344: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.036131ms)
May 11 13:27:02.346: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.336873ms)
May 11 13:27:02.349: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.270799ms)
May 11 13:27:02.351: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.447535ms)
May 11 13:27:02.354: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.830772ms)
May 11 13:27:02.356: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.046443ms)
May 11 13:27:02.358: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.288052ms)
May 11 13:27:02.360: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 1.798411ms)
May 11 13:27:02.362: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.279261ms)
May 11 13:27:02.365: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/
(200; 2.170474ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:27:02.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5459" for this suite. May 11 13:27:08.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:27:08.598: INFO: namespace proxy-5459 deletion completed in 6.141376658s • [SLOW TEST:6.466 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:27:08.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 11 13:27:23.135: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 13:27:23.197: INFO: Pod pod-with-prestop-http-hook still exists May 11 13:27:25.197: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 13:27:25.200: INFO: Pod pod-with-prestop-http-hook still exists May 11 13:27:27.197: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 13:27:27.201: INFO: Pod pod-with-prestop-http-hook still exists May 11 13:27:29.197: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 13:27:29.235: INFO: Pod pod-with-prestop-http-hook still exists May 11 13:27:31.197: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 13:27:31.201: INFO: Pod pod-with-prestop-http-hook still exists May 11 13:27:33.197: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 13:27:33.201: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:27:33.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2968" for this suite. 
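The lifecycle-hook test deletes a pod carrying a preStop httpGet hook and then checks that the handler pod received the request before the container terminated. Below is a self-contained sketch that instead points the hook at the container's own server (the real test targets a separate handler pod; names here are illustrative):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: main
    image: nginx:1.14-alpine
    lifecycle:
      preStop:
        httpGet:
          path: /              # host defaults to the pod IP, so this hits our own nginx
          port: 80
EOF
# Deleting the pod fires the preStop GET before the container receives SIGTERM.
kubectl delete pod pod-with-prestop-http-hook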
May 11 13:27:59.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:27:59.437: INFO: namespace container-lifecycle-hook-2968 deletion completed in 26.226638063s • [SLOW TEST:50.837 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:27:59.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc May 11 13:27:59.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2491' May 11 13:27:59.891: INFO: stderr: "" May 11 13:27:59.891: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. May 11 13:28:00.896: INFO: Selector matched 1 pods for map[app:redis] May 11 13:28:00.896: INFO: Found 0 / 1 May 11 13:28:01.925: INFO: Selector matched 1 pods for map[app:redis] May 11 13:28:01.925: INFO: Found 0 / 1 May 11 13:28:03.057: INFO: Selector matched 1 pods for map[app:redis] May 11 13:28:03.057: INFO: Found 0 / 1 May 11 13:28:03.895: INFO: Selector matched 1 pods for map[app:redis] May 11 13:28:03.895: INFO: Found 0 / 1 May 11 13:28:04.897: INFO: Selector matched 1 pods for map[app:redis] May 11 13:28:04.897: INFO: Found 1 / 1 May 11 13:28:04.897: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 11 13:28:04.925: INFO: Selector matched 1 pods for map[app:redis] May 11 13:28:04.925: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings May 11 13:28:04.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qbc2n redis-master --namespace=kubectl-2491' May 11 13:28:05.035: INFO: stderr: "" May 11 13:28:05.035: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 11 May 13:28:04.675 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 May 13:28:04.675 # Server started, Redis version 3.2.12\n1:M 11 May 13:28:04.676 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 May 13:28:04.676 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines May 11 13:28:05.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qbc2n redis-master --namespace=kubectl-2491 --tail=1' May 11 13:28:05.165: INFO: stderr: "" May 11 13:28:05.166: INFO: stdout: "1:M 11 May 13:28:04.676 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes May 11 13:28:05.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qbc2n redis-master --namespace=kubectl-2491 --limit-bytes=1' May 11 13:28:05.256: INFO: stderr: "" May 11 13:28:05.256: INFO: stdout: " " STEP: exposing timestamps May 11 13:28:05.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qbc2n redis-master --namespace=kubectl-2491 --tail=1 --timestamps' May 11 13:28:05.367: INFO: stderr: "" May 11 13:28:05.367: INFO: stdout: "2020-05-11T13:28:04.676148057Z 1:M 11 May 13:28:04.676 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range May 11 13:28:07.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qbc2n redis-master --namespace=kubectl-2491 --since=1s' May 11 13:28:07.952: INFO: stderr: "" May 11 13:28:07.952: INFO: stdout: "" May 11 13:28:07.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qbc2n redis-master --namespace=kubectl-2491 --since=24h' May 11 13:28:08.048: INFO: stderr: "" May 11 13:28:08.048: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 11 May 13:28:04.675 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 May 13:28:04.675 # Server started, Redis version 3.2.12\n1:M 11 May 13:28:04.676 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 May 13:28:04.676 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources May 11 13:28:08.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2491' May 11 13:28:08.156: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 13:28:08.156: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" May 11 13:28:08.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-2491' May 11 13:28:08.250: INFO: stderr: "No resources found.\n" May 11 13:28:08.250: INFO: stdout: "" May 11 13:28:08.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-2491 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 11 13:28:08.331: INFO: stderr: "" May 11 13:28:08.331: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:28:08.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2491" for this suite. 
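
Note: the test above walks through kubectl's server-side log filters one flag at a time: the full stream, --tail=1 for the last line, --limit-bytes=1 for the first byte, --timestamps for an RFC3339 prefix, and --since for a time window. A minimal sketch of the same flags, assuming a hypothetical pod redis-master-xxxxx with a redis-master container in a namespace named demo:

# Full container log, then the filters exercised by the test above.
kubectl logs redis-master-xxxxx redis-master --namespace=demo
kubectl logs redis-master-xxxxx redis-master --namespace=demo --tail=1          # last line only
kubectl logs redis-master-xxxxx redis-master --namespace=demo --limit-bytes=1   # first byte only
kubectl logs redis-master-xxxxx redis-master --namespace=demo --tail=1 --timestamps  # RFC3339 timestamp prefix
kubectl logs redis-master-xxxxx redis-master --namespace=demo --since=1s        # may be empty if the pod has been quiet
kubectl logs redis-master-xxxxx redis-master --namespace=demo --since=24h       # everything from the last day

These filters are applied by the kubelet, so they are cheap even for large logs.
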
May 11 13:28:14.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:28:14.523: INFO: namespace kubectl-2491 deletion completed in 6.188800247s • [SLOW TEST:15.086 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:28:14.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 13:28:14.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 11 13:28:15.137: INFO: stderr: "" May 11 13:28:15.137: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:43Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:28:15.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-985" for this suite. 
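
Note: the version check above only asserts that both the client and the server version structs are printed. To pull individual fields out of the same report, the JSON output form can be combined with a local jq; jq is an assumption here, not something the suite provides:

kubectl version                  # human-readable Client Version / Server Version lines
kubectl version -o json | jq -r '.clientVersion.gitVersion, .serverVersion.gitVersion'
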
May 11 13:28:21.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:28:21.482: INFO: namespace kubectl-985 deletion completed in 6.304494366s • [SLOW TEST:6.958 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:28:21.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 11 13:28:30.318: INFO: Successfully updated pod "annotationupdate6ca6e14e-a8ae-4095-b6e2-79294a4e7044" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:28:34.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3805" for this suite. 
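
Note: the annotation-update test above creates a pod whose annotations are exposed through a projected downwardAPI volume, mutates them, and waits for the kubelet to rewrite the mounted file. A minimal sketch of that arrangement; the pod name annotationupdate-demo, the mount path, and the build annotation are all hypothetical:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    build: "1"
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
# Mutate the annotation and re-read the projected file; the kubelet
# refreshes it on its sync loop, so allow a few seconds.
kubectl annotate pod annotationupdate-demo build=2 --overwrite
kubectl exec annotationupdate-demo -- cat /etc/podinfo/annotations
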
May 11 13:28:58.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:28:58.545: INFO: namespace projected-3805 deletion completed in 24.121344396s • [SLOW TEST:37.063 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:28:58.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 11 13:29:07.473: INFO: Successfully updated pod "labelsupdate52ab87f2-1d70-4c1f-8ad1-65bac03a2554" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:29:09.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-516" for this suite. 
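
Note: the labels variant above differs from the annotations case only in the projected field, fieldPath: metadata.labels written to a file such as labels. Assuming a hypothetical pod labelsupdate-demo built the same way as the sketch after the annotations test:

# Change a label and re-read the projected file (names hypothetical).
kubectl label pod labelsupdate-demo role=canary --overwrite
kubectl exec labelsupdate-demo -- cat /etc/podinfo/labels
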
May 11 13:29:33.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:29:33.621: INFO: namespace projected-516 deletion completed in 24.112005786s • [SLOW TEST:35.076 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:29:33.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-53becfff-6b43-4358-b044-6646fcfc59eb STEP: Creating secret with name secret-projected-all-test-volume-2836524b-ab68-4466-94fc-6a6e0fe9dd42 STEP: Creating a pod to test Check all projections for projected volume plugin May 11 13:29:34.015: INFO: Waiting up to 5m0s for pod "projected-volume-8422cc50-fb4a-46de-879e-b046a0631428" in namespace "projected-5909" to be "success or failure" May 11 13:29:34.026: INFO: Pod "projected-volume-8422cc50-fb4a-46de-879e-b046a0631428": Phase="Pending", Reason="", readiness=false. Elapsed: 10.53806ms May 11 13:29:36.094: INFO: Pod "projected-volume-8422cc50-fb4a-46de-879e-b046a0631428": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079178199s May 11 13:29:38.370: INFO: Pod "projected-volume-8422cc50-fb4a-46de-879e-b046a0631428": Phase="Pending", Reason="", readiness=false. Elapsed: 4.354636162s May 11 13:29:40.693: INFO: Pod "projected-volume-8422cc50-fb4a-46de-879e-b046a0631428": Phase="Pending", Reason="", readiness=false. Elapsed: 6.677842989s May 11 13:29:42.696: INFO: Pod "projected-volume-8422cc50-fb4a-46de-879e-b046a0631428": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.68138986s STEP: Saw pod success May 11 13:29:42.696: INFO: Pod "projected-volume-8422cc50-fb4a-46de-879e-b046a0631428" satisfied condition "success or failure" May 11 13:29:42.699: INFO: Trying to get logs from node iruya-worker pod projected-volume-8422cc50-fb4a-46de-879e-b046a0631428 container projected-all-volume-test: STEP: delete the pod May 11 13:29:42.752: INFO: Waiting for pod projected-volume-8422cc50-fb4a-46de-879e-b046a0631428 to disappear May 11 13:29:42.806: INFO: Pod projected-volume-8422cc50-fb4a-46de-879e-b046a0631428 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:29:42.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5909" for this suite. 
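
Note: the combined-projection test above checks that one projected volume can merge a ConfigMap, a Secret, and downwardAPI fields under a single mount. A sketch under hypothetical names (projected-cm, projected-secret, projected-all-demo):

kubectl create configmap projected-cm --from-literal=configmap-data=configmap-value
kubectl create secret generic projected-secret --from-literal=secret-data=secret-value
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-all-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox
    command: ["sh", "-c", "cat /all/podname /all/cm /all/secret"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: projected-cm
          items:
          - key: configmap-data
            path: cm
      - secret:
          name: projected-secret
          items:
          - key: secret-data
            path: secret
EOF
kubectl logs projected-all-demo   # all three sources appear under the one mount
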
May 11 13:29:50.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:29:51.071: INFO: namespace projected-5909 deletion completed in 8.260939487s • [SLOW TEST:17.450 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:29:51.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 11 13:29:51.452: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1612,SelfLink:/api/v1/namespaces/watch-1612/configmaps/e2e-watch-test-configmap-a,UID:b03f2088-afe3-4199-916b-2ae044c243af,ResourceVersion:10251194,Generation:0,CreationTimestamp:2020-05-11 13:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 11 13:29:51.452: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1612,SelfLink:/api/v1/namespaces/watch-1612/configmaps/e2e-watch-test-configmap-a,UID:b03f2088-afe3-4199-916b-2ae044c243af,ResourceVersion:10251194,Generation:0,CreationTimestamp:2020-05-11 13:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 11 13:30:01.461: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1612,SelfLink:/api/v1/namespaces/watch-1612/configmaps/e2e-watch-test-configmap-a,UID:b03f2088-afe3-4199-916b-2ae044c243af,ResourceVersion:10251214,Generation:0,CreationTimestamp:2020-05-11 13:29:51 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 11 13:30:01.461: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1612,SelfLink:/api/v1/namespaces/watch-1612/configmaps/e2e-watch-test-configmap-a,UID:b03f2088-afe3-4199-916b-2ae044c243af,ResourceVersion:10251214,Generation:0,CreationTimestamp:2020-05-11 13:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 11 13:30:11.471: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1612,SelfLink:/api/v1/namespaces/watch-1612/configmaps/e2e-watch-test-configmap-a,UID:b03f2088-afe3-4199-916b-2ae044c243af,ResourceVersion:10251235,Generation:0,CreationTimestamp:2020-05-11 13:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 11 13:30:11.471: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1612,SelfLink:/api/v1/namespaces/watch-1612/configmaps/e2e-watch-test-configmap-a,UID:b03f2088-afe3-4199-916b-2ae044c243af,ResourceVersion:10251235,Generation:0,CreationTimestamp:2020-05-11 13:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 11 13:30:21.509: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1612,SelfLink:/api/v1/namespaces/watch-1612/configmaps/e2e-watch-test-configmap-a,UID:b03f2088-afe3-4199-916b-2ae044c243af,ResourceVersion:10251256,Generation:0,CreationTimestamp:2020-05-11 13:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 11 13:30:21.509: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1612,SelfLink:/api/v1/namespaces/watch-1612/configmaps/e2e-watch-test-configmap-a,UID:b03f2088-afe3-4199-916b-2ae044c243af,ResourceVersion:10251256,Generation:0,CreationTimestamp:2020-05-11 13:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 11 13:30:31.516: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1612,SelfLink:/api/v1/namespaces/watch-1612/configmaps/e2e-watch-test-configmap-b,UID:c369b3b9-9730-47be-a8e1-bbd5e4f19c6f,ResourceVersion:10251277,Generation:0,CreationTimestamp:2020-05-11 13:30:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 11 13:30:31.516: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1612,SelfLink:/api/v1/namespaces/watch-1612/configmaps/e2e-watch-test-configmap-b,UID:c369b3b9-9730-47be-a8e1-bbd5e4f19c6f,ResourceVersion:10251277,Generation:0,CreationTimestamp:2020-05-11 13:30:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 11 13:30:41.558: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1612,SelfLink:/api/v1/namespaces/watch-1612/configmaps/e2e-watch-test-configmap-b,UID:c369b3b9-9730-47be-a8e1-bbd5e4f19c6f,ResourceVersion:10251297,Generation:0,CreationTimestamp:2020-05-11 13:30:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 11 13:30:41.558: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1612,SelfLink:/api/v1/namespaces/watch-1612/configmaps/e2e-watch-test-configmap-b,UID:c369b3b9-9730-47be-a8e1-bbd5e4f19c6f,ResourceVersion:10251297,Generation:0,CreationTimestamp:2020-05-11 13:30:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] 
Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:30:51.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1612" for this suite. May 11 13:30:57.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:30:57.745: INFO: namespace watch-1612 deletion completed in 6.128584816s • [SLOW TEST:66.674 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:30:57.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 13:30:57.929: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c424480e-f698-4a84-85ad-4bcb3d17286c" in namespace "downward-api-2225" to be "success or failure" May 11 13:30:57.950: INFO: Pod "downwardapi-volume-c424480e-f698-4a84-85ad-4bcb3d17286c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.512607ms May 11 13:30:59.987: INFO: Pod "downwardapi-volume-c424480e-f698-4a84-85ad-4bcb3d17286c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057671844s May 11 13:31:02.209: INFO: Pod "downwardapi-volume-c424480e-f698-4a84-85ad-4bcb3d17286c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.279280175s May 11 13:31:04.676: INFO: Pod "downwardapi-volume-c424480e-f698-4a84-85ad-4bcb3d17286c": Phase="Running", Reason="", readiness=true. Elapsed: 6.747221316s May 11 13:31:06.680: INFO: Pod "downwardapi-volume-c424480e-f698-4a84-85ad-4bcb3d17286c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.751145665s STEP: Saw pod success May 11 13:31:06.680: INFO: Pod "downwardapi-volume-c424480e-f698-4a84-85ad-4bcb3d17286c" satisfied condition "success or failure" May 11 13:31:06.684: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-c424480e-f698-4a84-85ad-4bcb3d17286c container client-container: STEP: delete the pod May 11 13:31:06.809: INFO: Waiting for pod downwardapi-volume-c424480e-f698-4a84-85ad-4bcb3d17286c to disappear May 11 13:31:06.870: INFO: Pod downwardapi-volume-c424480e-f698-4a84-85ad-4bcb3d17286c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:31:06.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2225" for this suite. May 11 13:31:14.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:31:15.131: INFO: namespace downward-api-2225 deletion completed in 8.257138586s • [SLOW TEST:17.386 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:31:15.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments May 11 13:31:15.462: INFO: Waiting up to 5m0s for pod "client-containers-69489171-55bc-4d31-8c04-b6e17725b63d" in namespace "containers-3151" to be "success or failure" May 11 13:31:15.474: INFO: Pod "client-containers-69489171-55bc-4d31-8c04-b6e17725b63d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.053572ms May 11 13:31:17.575: INFO: Pod "client-containers-69489171-55bc-4d31-8c04-b6e17725b63d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113036836s May 11 13:31:19.659: INFO: Pod "client-containers-69489171-55bc-4d31-8c04-b6e17725b63d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196778924s May 11 13:31:21.661: INFO: Pod "client-containers-69489171-55bc-4d31-8c04-b6e17725b63d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.199152764s STEP: Saw pod success May 11 13:31:21.661: INFO: Pod "client-containers-69489171-55bc-4d31-8c04-b6e17725b63d" satisfied condition "success or failure" May 11 13:31:21.663: INFO: Trying to get logs from node iruya-worker pod client-containers-69489171-55bc-4d31-8c04-b6e17725b63d container test-container: STEP: delete the pod May 11 13:31:21.809: INFO: Waiting for pod client-containers-69489171-55bc-4d31-8c04-b6e17725b63d to disappear May 11 13:31:21.834: INFO: Pod client-containers-69489171-55bc-4d31-8c04-b6e17725b63d no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:31:21.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3151" for this suite. May 11 13:31:28.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:31:28.380: INFO: namespace containers-3151 deletion completed in 6.542547826s • [SLOW TEST:13.248 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:31:28.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:32:09.810: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1283" for this suite. May 11 13:32:18.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:32:18.174: INFO: namespace container-runtime-1283 deletion completed in 8.362179159s • [SLOW TEST:49.795 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:32:18.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-9d620bdd-8fd5-4722-b9de-2954c313300a STEP: Creating a pod to test consume configMaps May 11 13:32:18.816: INFO: Waiting up to 5m0s for pod "pod-configmaps-5e0bc7a3-c16b-467b-abf4-17f60ebe4a11" in namespace "configmap-2753" to be "success or failure" May 11 13:32:18.922: INFO: Pod "pod-configmaps-5e0bc7a3-c16b-467b-abf4-17f60ebe4a11": Phase="Pending", Reason="", readiness=false. Elapsed: 106.506983ms May 11 13:32:20.926: INFO: Pod "pod-configmaps-5e0bc7a3-c16b-467b-abf4-17f60ebe4a11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110223349s May 11 13:32:22.931: INFO: Pod "pod-configmaps-5e0bc7a3-c16b-467b-abf4-17f60ebe4a11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115013624s May 11 13:32:25.096: INFO: Pod "pod-configmaps-5e0bc7a3-c16b-467b-abf4-17f60ebe4a11": Phase="Pending", Reason="", readiness=false. Elapsed: 6.280421789s May 11 13:32:27.100: INFO: Pod "pod-configmaps-5e0bc7a3-c16b-467b-abf4-17f60ebe4a11": Phase="Running", Reason="", readiness=true. Elapsed: 8.284535264s May 11 13:32:29.145: INFO: Pod "pod-configmaps-5e0bc7a3-c16b-467b-abf4-17f60ebe4a11": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.328957928s STEP: Saw pod success May 11 13:32:29.145: INFO: Pod "pod-configmaps-5e0bc7a3-c16b-467b-abf4-17f60ebe4a11" satisfied condition "success or failure" May 11 13:32:29.148: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-5e0bc7a3-c16b-467b-abf4-17f60ebe4a11 container configmap-volume-test: STEP: delete the pod May 11 13:32:29.330: INFO: Waiting for pod pod-configmaps-5e0bc7a3-c16b-467b-abf4-17f60ebe4a11 to disappear May 11 13:32:29.332: INFO: Pod pod-configmaps-5e0bc7a3-c16b-467b-abf4-17f60ebe4a11 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:32:29.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2753" for this suite. May 11 13:32:37.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:32:37.435: INFO: namespace configmap-2753 deletion completed in 8.100294375s • [SLOW TEST:19.260 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:32:37.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-44dafc8b-f06b-4fae-8049-6315b29288af STEP: Creating a pod to test consume configMaps May 11 13:32:37.809: INFO: Waiting up to 5m0s for pod "pod-configmaps-e9440e4d-3716-4f97-b035-ab67c24d9023" in namespace "configmap-1979" to be "success or failure" May 11 13:32:37.846: INFO: Pod "pod-configmaps-e9440e4d-3716-4f97-b035-ab67c24d9023": Phase="Pending", Reason="", readiness=false. Elapsed: 36.774011ms May 11 13:32:39.851: INFO: Pod "pod-configmaps-e9440e4d-3716-4f97-b035-ab67c24d9023": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041536876s May 11 13:32:41.854: INFO: Pod "pod-configmaps-e9440e4d-3716-4f97-b035-ab67c24d9023": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045109762s May 11 13:32:43.875: INFO: Pod "pod-configmaps-e9440e4d-3716-4f97-b035-ab67c24d9023": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.065615545s STEP: Saw pod success May 11 13:32:43.875: INFO: Pod "pod-configmaps-e9440e4d-3716-4f97-b035-ab67c24d9023" satisfied condition "success or failure" May 11 13:32:43.877: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-e9440e4d-3716-4f97-b035-ab67c24d9023 container configmap-volume-test: STEP: delete the pod May 11 13:32:43.921: INFO: Waiting for pod pod-configmaps-e9440e4d-3716-4f97-b035-ab67c24d9023 to disappear May 11 13:32:43.956: INFO: Pod pod-configmaps-e9440e4d-3716-4f97-b035-ab67c24d9023 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:32:43.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1979" for this suite. May 11 13:32:50.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:32:50.987: INFO: namespace configmap-1979 deletion completed in 7.027394935s • [SLOW TEST:13.551 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:32:50.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 11 13:32:51.311: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1924,SelfLink:/api/v1/namespaces/watch-1924/configmaps/e2e-watch-test-label-changed,UID:ae05bbb1-6920-47de-aa3f-ae99d3c9a5eb,ResourceVersion:10251723,Generation:0,CreationTimestamp:2020-05-11 13:32:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 11 13:32:51.311: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1924,SelfLink:/api/v1/namespaces/watch-1924/configmaps/e2e-watch-test-label-changed,UID:ae05bbb1-6920-47de-aa3f-ae99d3c9a5eb,ResourceVersion:10251724,Generation:0,CreationTimestamp:2020-05-11 13:32:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 11 13:32:51.311: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1924,SelfLink:/api/v1/namespaces/watch-1924/configmaps/e2e-watch-test-label-changed,UID:ae05bbb1-6920-47de-aa3f-ae99d3c9a5eb,ResourceVersion:10251725,Generation:0,CreationTimestamp:2020-05-11 13:32:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 11 13:33:01.817: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1924,SelfLink:/api/v1/namespaces/watch-1924/configmaps/e2e-watch-test-label-changed,UID:ae05bbb1-6920-47de-aa3f-ae99d3c9a5eb,ResourceVersion:10251746,Generation:0,CreationTimestamp:2020-05-11 13:32:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 11 13:33:01.817: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1924,SelfLink:/api/v1/namespaces/watch-1924/configmaps/e2e-watch-test-label-changed,UID:ae05bbb1-6920-47de-aa3f-ae99d3c9a5eb,ResourceVersion:10251748,Generation:0,CreationTimestamp:2020-05-11 13:32:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 11 13:33:01.818: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1924,SelfLink:/api/v1/namespaces/watch-1924/configmaps/e2e-watch-test-label-changed,UID:ae05bbb1-6920-47de-aa3f-ae99d3c9a5eb,ResourceVersion:10251749,Generation:0,CreationTimestamp:2020-05-11 13:32:51 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:33:01.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1924" for this suite. May 11 13:33:08.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:33:08.203: INFO: namespace watch-1924 deletion completed in 6.255927498s • [SLOW TEST:17.216 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:33:08.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-eb22cfc3-eeb1-4a22-a1ae-592155e33251 STEP: Creating configMap with name cm-test-opt-upd-483bd59c-a803-4b15-a82a-9ba0383f924e STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-eb22cfc3-eeb1-4a22-a1ae-592155e33251 STEP: Updating configmap cm-test-opt-upd-483bd59c-a803-4b15-a82a-9ba0383f924e STEP: Creating configMap with name cm-test-opt-create-c81cb6ac-855f-4be6-984c-e2af3c10d539 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:34:42.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2388" for this suite. 
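
Note: the optional-updates test above covers three cases at once: a referenced ConfigMap that is deleted (opt-del), one whose data is updated (opt-upd), and one that is created only after the pod starts (opt-create), all tolerated because the projected sources are marked optional. A reduced sketch with hypothetical names:

kubectl create configmap cm-opt-upd --from-literal=upd=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-optional-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do ls -R /etc/cfg; sleep 5; done"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: cm-opt-upd
          optional: true
      - configMap:
          name: cm-opt-create      # does not exist yet; allowed because optional
          optional: true
EOF
# Later mutations are propagated into the running pod by the kubelet:
kubectl create configmap cm-opt-create --from-literal=create=value-1
kubectl patch configmap cm-opt-upd -p '{"data":{"upd":"value-2"}}'
kubectl logs projected-optional-demo --tail=5    # the file listing follows the ConfigMaps
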
May 11 13:35:06.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:35:06.533: INFO: namespace projected-2388 deletion completed in 24.167639717s • [SLOW TEST:118.330 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:35:06.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs May 11 13:35:06.822: INFO: Waiting up to 5m0s for pod "pod-c7f9d052-563a-48a6-8fa0-b766de1eb881" in namespace "emptydir-143" to be "success or failure" May 11 13:35:06.850: INFO: Pod "pod-c7f9d052-563a-48a6-8fa0-b766de1eb881": Phase="Pending", Reason="", readiness=false. Elapsed: 28.066304ms May 11 13:35:09.114: INFO: Pod "pod-c7f9d052-563a-48a6-8fa0-b766de1eb881": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291627293s May 11 13:35:11.752: INFO: Pod "pod-c7f9d052-563a-48a6-8fa0-b766de1eb881": Phase="Pending", Reason="", readiness=false. Elapsed: 4.92969598s May 11 13:35:13.756: INFO: Pod "pod-c7f9d052-563a-48a6-8fa0-b766de1eb881": Phase="Pending", Reason="", readiness=false. Elapsed: 6.934160418s May 11 13:35:15.759: INFO: Pod "pod-c7f9d052-563a-48a6-8fa0-b766de1eb881": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.937535798s STEP: Saw pod success May 11 13:35:15.760: INFO: Pod "pod-c7f9d052-563a-48a6-8fa0-b766de1eb881" satisfied condition "success or failure" May 11 13:35:15.762: INFO: Trying to get logs from node iruya-worker2 pod pod-c7f9d052-563a-48a6-8fa0-b766de1eb881 container test-container: STEP: delete the pod May 11 13:35:15.947: INFO: Waiting for pod pod-c7f9d052-563a-48a6-8fa0-b766de1eb881 to disappear May 11 13:35:15.951: INFO: Pod pod-c7f9d052-563a-48a6-8fa0-b766de1eb881 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:35:15.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-143" for this suite. 
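
Note: the EmptyDir case above, (non-root,0777,tmpfs), asserts that a memory-backed emptyDir is writable by a non-root user and that 0777 file permissions stick. A sketch with hypothetical names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                 # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "grep ' /ephemeral ' /proc/mounts; touch /ephemeral/f && chmod 0777 /ephemeral/f && ls -l /ephemeral"]
    volumeMounts:
    - name: scratch
      mountPath: /ephemeral
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                # tmpfs-backed
EOF
kubectl logs emptydir-tmpfs-demo    # mount line shows tmpfs; file mode is -rwxrwxrwx
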
May 11 13:35:22.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:35:22.123: INFO: namespace emptydir-143 deletion completed in 6.168232525s • [SLOW TEST:15.589 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:35:22.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-6475 STEP: creating a selector STEP: Creating the service pods in kubernetes May 11 13:35:22.346: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 11 13:35:58.555: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.183 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6475 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 13:35:58.555: INFO: >>> kubeConfig: /root/.kube/config I0511 13:35:58.579142 7 log.go:172] (0xc0018509a0) (0xc0015b30e0) Create stream I0511 13:35:58.579163 7 log.go:172] (0xc0018509a0) (0xc0015b30e0) Stream added, broadcasting: 1 I0511 13:35:58.580954 7 log.go:172] (0xc0018509a0) Reply frame received for 1 I0511 13:35:58.580986 7 log.go:172] (0xc0018509a0) (0xc001e903c0) Create stream I0511 13:35:58.580998 7 log.go:172] (0xc0018509a0) (0xc001e903c0) Stream added, broadcasting: 3 I0511 13:35:58.581983 7 log.go:172] (0xc0018509a0) Reply frame received for 3 I0511 13:35:58.582065 7 log.go:172] (0xc0018509a0) (0xc001e90460) Create stream I0511 13:35:58.582080 7 log.go:172] (0xc0018509a0) (0xc001e90460) Stream added, broadcasting: 5 I0511 13:35:58.582874 7 log.go:172] (0xc0018509a0) Reply frame received for 5 I0511 13:35:59.651418 7 log.go:172] (0xc0018509a0) Data frame received for 3 I0511 13:35:59.651510 7 log.go:172] (0xc001e903c0) (3) Data frame handling I0511 13:35:59.651538 7 log.go:172] (0xc001e903c0) (3) Data frame sent I0511 13:35:59.651578 7 log.go:172] (0xc0018509a0) Data frame received for 5 I0511 13:35:59.651634 7 log.go:172] (0xc001e90460) (5) Data frame handling I0511 13:35:59.653361 7 log.go:172] (0xc0018509a0) Data frame received for 3 I0511 13:35:59.653385 7 log.go:172] (0xc001e903c0) (3) Data frame handling I0511 13:35:59.654059 7 log.go:172] (0xc0018509a0) Data frame received for 1 I0511 13:35:59.654078 7 log.go:172] (0xc0015b30e0) (1) Data frame handling I0511 13:35:59.654095 7 log.go:172] (0xc0015b30e0) (1) Data frame sent I0511 13:35:59.654119 7 
log.go:172] (0xc0018509a0) (0xc0015b30e0) Stream removed, broadcasting: 1 I0511 13:35:59.654134 7 log.go:172] (0xc0018509a0) Go away received I0511 13:35:59.654228 7 log.go:172] (0xc0018509a0) (0xc0015b30e0) Stream removed, broadcasting: 1 I0511 13:35:59.654245 7 log.go:172] (0xc0018509a0) (0xc001e903c0) Stream removed, broadcasting: 3 I0511 13:35:59.654255 7 log.go:172] (0xc0018509a0) (0xc001e90460) Stream removed, broadcasting: 5 May 11 13:35:59.654: INFO: Found all expected endpoints: [netserver-0] May 11 13:35:59.811: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.26 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6475 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 13:35:59.811: INFO: >>> kubeConfig: /root/.kube/config I0511 13:35:59.843849 7 log.go:172] (0xc002064a50) (0xc001893180) Create stream I0511 13:35:59.843869 7 log.go:172] (0xc002064a50) (0xc001893180) Stream added, broadcasting: 1 I0511 13:35:59.845659 7 log.go:172] (0xc002064a50) Reply frame received for 1 I0511 13:35:59.845683 7 log.go:172] (0xc002064a50) (0xc001e90500) Create stream I0511 13:35:59.845691 7 log.go:172] (0xc002064a50) (0xc001e90500) Stream added, broadcasting: 3 I0511 13:35:59.846177 7 log.go:172] (0xc002064a50) Reply frame received for 3 I0511 13:35:59.846196 7 log.go:172] (0xc002064a50) (0xc001893220) Create stream I0511 13:35:59.846204 7 log.go:172] (0xc002064a50) (0xc001893220) Stream added, broadcasting: 5 I0511 13:35:59.846790 7 log.go:172] (0xc002064a50) Reply frame received for 5 I0511 13:36:00.913302 7 log.go:172] (0xc002064a50) Data frame received for 3 I0511 13:36:00.913336 7 log.go:172] (0xc001e90500) (3) Data frame handling I0511 13:36:00.913358 7 log.go:172] (0xc001e90500) (3) Data frame sent I0511 13:36:00.913378 7 log.go:172] (0xc002064a50) Data frame received for 3 I0511 13:36:00.913390 7 log.go:172] (0xc001e90500) (3) Data frame handling I0511 13:36:00.913471 7 log.go:172] (0xc002064a50) Data frame received for 5 I0511 13:36:00.913513 7 log.go:172] (0xc001893220) (5) Data frame handling I0511 13:36:00.915057 7 log.go:172] (0xc002064a50) Data frame received for 1 I0511 13:36:00.915076 7 log.go:172] (0xc001893180) (1) Data frame handling I0511 13:36:00.915087 7 log.go:172] (0xc001893180) (1) Data frame sent I0511 13:36:00.915104 7 log.go:172] (0xc002064a50) (0xc001893180) Stream removed, broadcasting: 1 I0511 13:36:00.915131 7 log.go:172] (0xc002064a50) Go away received I0511 13:36:00.915300 7 log.go:172] (0xc002064a50) (0xc001893180) Stream removed, broadcasting: 1 I0511 13:36:00.915322 7 log.go:172] (0xc002064a50) (0xc001e90500) Stream removed, broadcasting: 3 I0511 13:36:00.915334 7 log.go:172] (0xc002064a50) (0xc001893220) Stream removed, broadcasting: 5 May 11 13:36:00.915: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:36:00.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6475" for this suite. 
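
Note: the probe the framework streams above can be reproduced by hand. The command, pod, container, and address below are taken verbatim from the log, so they are only valid while that namespace still exists:

kubectl exec host-test-container-pod -c hostexec --namespace=pod-network-test-6475 -- \
  /bin/sh -c "echo hostName | nc -w 1 -u 10.244.2.183 8081 | grep -v '^\s*$'"
# A non-empty reply (the netserver pod echoes back over UDP) marks the
# endpoint as reached; the test repeats this once per netserver pod.
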
May 11 13:36:27.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:36:27.179: INFO: namespace pod-network-test-6475 deletion completed in 26.259756445s • [SLOW TEST:65.056 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:36:27.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition May 11 13:36:27.629: INFO: Waiting up to 5m0s for pod "var-expansion-02b30d6c-cc47-45da-ae88-b49877668e00" in namespace "var-expansion-7082" to be "success or failure" May 11 13:36:27.700: INFO: Pod "var-expansion-02b30d6c-cc47-45da-ae88-b49877668e00": Phase="Pending", Reason="", readiness=false. Elapsed: 71.063406ms May 11 13:36:29.968: INFO: Pod "var-expansion-02b30d6c-cc47-45da-ae88-b49877668e00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.338867708s May 11 13:36:31.972: INFO: Pod "var-expansion-02b30d6c-cc47-45da-ae88-b49877668e00": Phase="Running", Reason="", readiness=true. Elapsed: 4.342833923s May 11 13:36:33.975: INFO: Pod "var-expansion-02b30d6c-cc47-45da-ae88-b49877668e00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.345981344s STEP: Saw pod success May 11 13:36:33.975: INFO: Pod "var-expansion-02b30d6c-cc47-45da-ae88-b49877668e00" satisfied condition "success or failure" May 11 13:36:33.977: INFO: Trying to get logs from node iruya-worker pod var-expansion-02b30d6c-cc47-45da-ae88-b49877668e00 container dapi-container: STEP: delete the pod May 11 13:36:34.744: INFO: Waiting for pod var-expansion-02b30d6c-cc47-45da-ae88-b49877668e00 to disappear May 11 13:36:34.968: INFO: Pod var-expansion-02b30d6c-cc47-45da-ae88-b49877668e00 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:36:34.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7082" for this suite. 
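The env-composition behavior exercised above can be reproduced with a small pod spec; a sketch with illustrative names, assuming kubectl points at a working cluster:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $(COMPOSED_VAR)"]
    env:
    - name: BASE_VAR
      value: "base-value"
    - name: COMPOSED_VAR
      value: "prefix-$(BASE_VAR)-suffix"  # $(VAR) expands to previously defined env vars
EOF
# kubectl logs var-expansion-demo should print: prefix-base-value-suffix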
May 11 13:36:41.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:36:41.282: INFO: namespace var-expansion-7082 deletion completed in 6.31095322s • [SLOW TEST:14.103 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:36:41.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:36:47.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4534" for this suite. 
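The read-only-root test above asserts that writes to the container's root filesystem fail; a minimal sketch of the setting it exercises (names illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-fs-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /file && echo writable || echo read-only"]
    securityContext:
      readOnlyRootFilesystem: true  # mounts / read-only for this container
EOF
# kubectl logs readonly-fs-demo is expected to print: read-only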
May 11 13:37:34.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:37:34.283: INFO: namespace kubelet-test-4534 deletion completed in 46.287044755s • [SLOW TEST:53.001 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:37:34.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5928.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5928.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5928.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5928.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5928.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5928.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 13:37:44.963: INFO: DNS probes using dns-5928/dns-test-86748219-1dc5-4d44-b822-34c1f5d6bade succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:37:45.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5928" for this suite. 
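Stripped of the result-file loop, the /etc/hosts probes above boil down to getent lookups run inside the prober pods; a hand-run sketch, assuming a pod in the test namespace whose image provides getent (names illustrative):

kubectl exec dns-querier-1 --namespace=dns-5928 -- \
  getent hosts dns-querier-1.dns-test-service.dns-5928.svc.cluster.local
# A non-empty answer (IP plus hostname) is what makes each probe write OK to
# its results file; the framework then polls those files for success.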
May 11 13:37:51.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:37:51.618: INFO: namespace dns-5928 deletion completed in 6.515860375s • [SLOW TEST:17.334 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:37:51.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 11 13:37:51.935: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 11 13:37:51.948: INFO: Waiting for terminating namespaces to be deleted... May 11 13:37:51.950: INFO: Logging pods the kubelet thinks are on node iruya-worker before test May 11 13:37:51.954: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded) May 11 13:37:51.954: INFO: Container kube-proxy ready: true, restart count 0 May 11 13:37:51.954: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container status recorded) May 11 13:37:51.954: INFO: Container kindnet-cni ready: true, restart count 0 May 11 13:37:51.954: INFO: Logging pods the kubelet thinks are on node iruya-worker2 before test May 11 13:37:51.958: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container status recorded) May 11 13:37:51.958: INFO: Container coredns ready: true, restart count 0 May 11 13:37:51.958: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container status recorded) May 11 13:37:51.958: INFO: Container coredns ready: true, restart count 0 May 11 13:37:51.958: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container status recorded) May 11 13:37:51.958: INFO: Container kube-proxy ready: true, restart count 0 May 11 13:37:51.958: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container status recorded) May 11 13:37:51.958: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-ea2f4fa1-8e86-4952-95a6-626fa229aac6 42 STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-ea2f4fa1-8e86-4952-95a6-626fa229aac6 off the node iruya-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-ea2f4fa1-8e86-4952-95a6-626fa229aac6 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:38:04.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7416" for this suite. May 11 13:38:24.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:38:24.655: INFO: namespace sched-pred-7416 deletion completed in 20.164340929s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:33.037 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:38:24.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 13:38:25.047: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:38:27.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-502" for this suite. 
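Creating and deleting a CRD, as the test above does through the Go client, maps onto a short kubectl session; a sketch assuming this v1.15-era cluster, where apiextensions.k8s.io/v1beta1 is the served version (group and names illustrative):

kubectl create -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com   # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
EOF
kubectl get crd widgets.example.com   # verify the definition registered
kubectl delete crd widgets.example.com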
May 11 13:38:33.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:38:33.508: INFO: namespace custom-resource-definition-502 deletion completed in 6.178441722s • [SLOW TEST:8.852 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:38:33.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-0ec4d0db-6a7e-421a-8166-9bea4c2025d7 in namespace container-probe-7663 May 11 13:38:39.776: INFO: Started pod busybox-0ec4d0db-6a7e-421a-8166-9bea4c2025d7 in namespace container-probe-7663 STEP: checking the pod's current state and verifying that restartCount is present May 11 13:38:39.782: INFO: Initial restart count of pod busybox-0ec4d0db-6a7e-421a-8166-9bea4c2025d7 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:42:40.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7663" for this suite. 
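The probe test above watches restartCount for roughly four minutes while an exec probe keeps succeeding; a minimal sketch of such a pod (names illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /tmp/health && sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]  # succeeds while the file exists
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# kubectl get pod liveness-exec-demo should keep showing RESTARTS 0, matching
# the initial restart count of 0 observed above.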
May 11 13:42:49.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:42:49.468: INFO: namespace container-probe-7663 deletion completed in 8.513130325s • [SLOW TEST:255.960 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:42:49.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info May 11 13:42:49.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 11 13:42:56.148: INFO: stderr: "" May 11 13:42:56.148: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:42:56.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1933" for this suite. 
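The cluster-info validation above is a direct replay of the CLI call; with the same kubeconfig it can be run by hand:

kubectl --kubeconfig=/root/.kube/config cluster-info
# Prints the Kubernetes master and KubeDNS URLs; the escape sequences captured
# in the stdout above are just the ANSI colors of this output.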
May 11 13:43:02.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:43:03.030: INFO: namespace kubectl-1933 deletion completed in 6.599874747s • [SLOW TEST:13.561 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:43:03.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 13:43:03.429: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f63b3bf-2ef2-4ce8-bf88-9f9cd10c0a74" in namespace "projected-6" to be "success or failure" May 11 13:43:03.507: INFO: Pod "downwardapi-volume-8f63b3bf-2ef2-4ce8-bf88-9f9cd10c0a74": Phase="Pending", Reason="", readiness=false. Elapsed: 77.892499ms May 11 13:43:05.511: INFO: Pod "downwardapi-volume-8f63b3bf-2ef2-4ce8-bf88-9f9cd10c0a74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08159629s May 11 13:43:07.515: INFO: Pod "downwardapi-volume-8f63b3bf-2ef2-4ce8-bf88-9f9cd10c0a74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085285519s May 11 13:43:09.590: INFO: Pod "downwardapi-volume-8f63b3bf-2ef2-4ce8-bf88-9f9cd10c0a74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.160858431s STEP: Saw pod success May 11 13:43:09.590: INFO: Pod "downwardapi-volume-8f63b3bf-2ef2-4ce8-bf88-9f9cd10c0a74" satisfied condition "success or failure" May 11 13:43:09.638: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-8f63b3bf-2ef2-4ce8-bf88-9f9cd10c0a74 container client-container: STEP: delete the pod May 11 13:43:10.028: INFO: Waiting for pod downwardapi-volume-8f63b3bf-2ef2-4ce8-bf88-9f9cd10c0a74 to disappear May 11 13:43:10.031: INFO: Pod downwardapi-volume-8f63b3bf-2ef2-4ce8-bf88-9f9cd10c0a74 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:43:10.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6" for this suite. 
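The projected downward API test above surfaces the container's memory limit as a file; a minimal sketch, with illustrative names and assuming a reachable cluster:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory  # exposed as a file in the projected volume
EOF
# kubectl logs projected-downward-demo should print 67108864 (64Mi in bytes).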
May 11 13:43:16.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:43:16.623: INFO: namespace projected-6 deletion completed in 6.571748886s • [SLOW TEST:13.593 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:43:16.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-649 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-649 to expose endpoints map[] May 11 13:43:16.876: INFO: Get endpoints failed (29.111926ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 11 13:43:17.880: INFO: successfully validated that service multi-endpoint-test in namespace services-649 exposes endpoints map[] (1.032639048s elapsed) STEP: Creating pod pod1 in namespace services-649 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-649 to expose endpoints map[pod1:[100]] May 11 13:43:22.501: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.614400926s elapsed, will retry) May 11 13:43:24.641: INFO: successfully validated that service multi-endpoint-test in namespace services-649 exposes endpoints map[pod1:[100]] (6.754520871s elapsed) STEP: Creating pod pod2 in namespace services-649 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-649 to expose endpoints map[pod1:[100] pod2:[101]] May 11 13:43:29.577: INFO: successfully validated that service multi-endpoint-test in namespace services-649 exposes endpoints map[pod1:[100] pod2:[101]] (4.932176924s elapsed) STEP: Deleting pod pod1 in namespace services-649 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-649 to expose endpoints map[pod2:[101]] May 11 13:43:30.830: INFO: successfully validated that service multi-endpoint-test in namespace services-649 exposes endpoints map[pod2:[101]] (1.250590302s elapsed) STEP: Deleting pod pod2 in namespace services-649 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-649 to expose endpoints map[] May 11 13:43:31.053: INFO: successfully validated that service multi-endpoint-test in namespace services-649 exposes endpoints map[] (218.593205ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:43:31.826: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-649" for this suite. May 11 13:43:54.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:43:54.193: INFO: namespace services-649 deletion completed in 22.315510502s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:37.570 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:43:54.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 13:43:54.408: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e050a724-33ab-4049-8e5e-e33966019703" in namespace "downward-api-6477" to be "success or failure" May 11 13:43:54.423: INFO: Pod "downwardapi-volume-e050a724-33ab-4049-8e5e-e33966019703": Phase="Pending", Reason="", readiness=false. Elapsed: 15.477466ms May 11 13:43:56.427: INFO: Pod "downwardapi-volume-e050a724-33ab-4049-8e5e-e33966019703": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019651217s May 11 13:43:58.431: INFO: Pod "downwardapi-volume-e050a724-33ab-4049-8e5e-e33966019703": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023914352s May 11 13:44:00.436: INFO: Pod "downwardapi-volume-e050a724-33ab-4049-8e5e-e33966019703": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028406352s STEP: Saw pod success May 11 13:44:00.436: INFO: Pod "downwardapi-volume-e050a724-33ab-4049-8e5e-e33966019703" satisfied condition "success or failure" May 11 13:44:00.439: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-e050a724-33ab-4049-8e5e-e33966019703 container client-container: STEP: delete the pod May 11 13:44:00.498: INFO: Waiting for pod downwardapi-volume-e050a724-33ab-4049-8e5e-e33966019703 to disappear May 11 13:44:00.868: INFO: Pod downwardapi-volume-e050a724-33ab-4049-8e5e-e33966019703 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:44:00.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6477" for this suite. 
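The cpu-limit variant above differs from the memory case mainly in the resource field and in the divisor used to scale the reported value; a sketch (names illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m   # report the limit in millicores
EOF
# kubectl logs downward-cpu-demo should print 500 (the limit in millicores).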
May 11 13:44:07.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:44:07.178: INFO: namespace downward-api-6477 deletion completed in 6.130115179s • [SLOW TEST:12.984 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:44:07.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium May 11 13:44:07.617: INFO: Waiting up to 5m0s for pod "pod-3d4ffecf-e55f-4321-bce2-b6f1a87871e6" in namespace "emptydir-9004" to be "success or failure" May 11 13:44:07.796: INFO: Pod "pod-3d4ffecf-e55f-4321-bce2-b6f1a87871e6": Phase="Pending", Reason="", readiness=false. Elapsed: 178.839103ms May 11 13:44:09.799: INFO: Pod "pod-3d4ffecf-e55f-4321-bce2-b6f1a87871e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182434597s May 11 13:44:11.962: INFO: Pod "pod-3d4ffecf-e55f-4321-bce2-b6f1a87871e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.34516418s May 11 13:44:13.967: INFO: Pod "pod-3d4ffecf-e55f-4321-bce2-b6f1a87871e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.349728414s STEP: Saw pod success May 11 13:44:13.967: INFO: Pod "pod-3d4ffecf-e55f-4321-bce2-b6f1a87871e6" satisfied condition "success or failure" May 11 13:44:13.970: INFO: Trying to get logs from node iruya-worker2 pod pod-3d4ffecf-e55f-4321-bce2-b6f1a87871e6 container test-container: STEP: delete the pod May 11 13:44:14.180: INFO: Waiting for pod pod-3d4ffecf-e55f-4321-bce2-b6f1a87871e6 to disappear May 11 13:44:14.231: INFO: Pod pod-3d4ffecf-e55f-4321-bce2-b6f1a87871e6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:44:14.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9004" for this suite. 
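The emptyDir permission tests above create a file with a given mode on a chosen medium; a minimal sketch of the default-medium case (names illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /ed/file && chmod 0666 /ed/file && ls -l /ed/file"]
    volumeMounts:
    - name: ed
      mountPath: /ed
  volumes:
  - name: ed
    emptyDir: {}   # default medium; medium: Memory gives the tmpfs variants
EOF
# The container log should show -rw-rw-rw- on /ed/file.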
May 11 13:44:20.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:44:20.523: INFO: namespace emptydir-9004 deletion completed in 6.287758298s • [SLOW TEST:13.345 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:44:20.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:44:26.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9437" for this suite. May 11 13:45:10.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:45:10.886: INFO: namespace kubelet-test-9437 deletion completed in 44.103768082s • [SLOW TEST:50.363 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:45:10.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all May 11 13:45:11.162: INFO: Waiting up to 5m0s for pod "client-containers-44a18bd5-71de-42b9-8f03-c0ffc84a4beb" in namespace "containers-8238" to be "success or failure" May 11 13:45:11.187: INFO: Pod 
"client-containers-44a18bd5-71de-42b9-8f03-c0ffc84a4beb": Phase="Pending", Reason="", readiness=false. Elapsed: 25.685884ms May 11 13:45:13.372: INFO: Pod "client-containers-44a18bd5-71de-42b9-8f03-c0ffc84a4beb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210013333s May 11 13:45:15.504: INFO: Pod "client-containers-44a18bd5-71de-42b9-8f03-c0ffc84a4beb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.342276641s May 11 13:45:17.508: INFO: Pod "client-containers-44a18bd5-71de-42b9-8f03-c0ffc84a4beb": Phase="Running", Reason="", readiness=true. Elapsed: 6.345890912s May 11 13:45:19.512: INFO: Pod "client-containers-44a18bd5-71de-42b9-8f03-c0ffc84a4beb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.350192601s STEP: Saw pod success May 11 13:45:19.512: INFO: Pod "client-containers-44a18bd5-71de-42b9-8f03-c0ffc84a4beb" satisfied condition "success or failure" May 11 13:45:19.515: INFO: Trying to get logs from node iruya-worker2 pod client-containers-44a18bd5-71de-42b9-8f03-c0ffc84a4beb container test-container: STEP: delete the pod May 11 13:45:19.569: INFO: Waiting for pod client-containers-44a18bd5-71de-42b9-8f03-c0ffc84a4beb to disappear May 11 13:45:19.706: INFO: Pod client-containers-44a18bd5-71de-42b9-8f03-c0ffc84a4beb no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:45:19.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8238" for this suite. May 11 13:45:25.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:45:25.818: INFO: namespace containers-8238 deletion completed in 6.108894625s • [SLOW TEST:14.932 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:45:25.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:45:55.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-791" for this suite. May 11 13:46:01.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:46:01.299: INFO: namespace namespaces-791 deletion completed in 6.232595302s STEP: Destroying namespace "nsdeletetest-1566" for this suite. May 11 13:46:01.301: INFO: Namespace nsdeletetest-1566 was already deleted STEP: Destroying namespace "nsdeletetest-8546" for this suite. May 11 13:46:07.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:46:07.542: INFO: namespace nsdeletetest-8546 deletion completed in 6.240369495s • [SLOW TEST:41.723 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:46:07.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-4b5117b0-cc51-4548-abd2-3e8b707e4bd7 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:46:19.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2512" for this suite. 
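The binary-data check above relies on the ConfigMap binaryData field, which carries base64-encoded bytes alongside plain-text data; a sketch (names illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: binary-demo
data:
  text-key: "some plain text"
binaryData:
  binary-key: 3q2+7w==   # base64 for the raw bytes 0xde 0xad 0xbe 0xef
EOF
# Mounted as a volume, both keys appear as files; binary-key holds the decoded
# bytes, which is what the "Waiting for pod with binary data" step checks.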
May 11 13:46:45.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:46:45.993: INFO: namespace configmap-2512 deletion completed in 26.109265582s • [SLOW TEST:38.451 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:46:45.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 13:46:46.297: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f629d14-27a6-4468-93e5-a46b9a5d8be5" in namespace "downward-api-8979" to be "success or failure" May 11 13:46:46.380: INFO: Pod "downwardapi-volume-4f629d14-27a6-4468-93e5-a46b9a5d8be5": Phase="Pending", Reason="", readiness=false. Elapsed: 82.926873ms May 11 13:46:48.384: INFO: Pod "downwardapi-volume-4f629d14-27a6-4468-93e5-a46b9a5d8be5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087252469s May 11 13:46:50.451: INFO: Pod "downwardapi-volume-4f629d14-27a6-4468-93e5-a46b9a5d8be5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153577138s May 11 13:46:52.454: INFO: Pod "downwardapi-volume-4f629d14-27a6-4468-93e5-a46b9a5d8be5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.156763538s STEP: Saw pod success May 11 13:46:52.454: INFO: Pod "downwardapi-volume-4f629d14-27a6-4468-93e5-a46b9a5d8be5" satisfied condition "success or failure" May 11 13:46:52.456: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-4f629d14-27a6-4468-93e5-a46b9a5d8be5 container client-container: STEP: delete the pod May 11 13:46:52.561: INFO: Waiting for pod downwardapi-volume-4f629d14-27a6-4468-93e5-a46b9a5d8be5 to disappear May 11 13:46:52.630: INFO: Pod downwardapi-volume-4f629d14-27a6-4468-93e5-a46b9a5d8be5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:46:52.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8979" for this suite. 
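The mode check above sets a per-item file mode on a downward API volume; a sketch (names illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -L -c %a /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400   # the per-item file mode this test verifies
EOF
# kubectl logs downward-mode-demo should print 400.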
May 11 13:46:58.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:46:58.793: INFO: namespace downward-api-8979 deletion completed in 6.158813483s • [SLOW TEST:12.799 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:46:58.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-a4037099-8c55-450b-a0f1-aa3351dfda72 STEP: Creating a pod to test consume configMaps May 11 13:46:59.014: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-affeba32-bcf3-43e4-95d3-7ca89fca2049" in namespace "projected-7592" to be "success or failure" May 11 13:46:59.018: INFO: Pod "pod-projected-configmaps-affeba32-bcf3-43e4-95d3-7ca89fca2049": Phase="Pending", Reason="", readiness=false. Elapsed: 4.34379ms May 11 13:47:01.037: INFO: Pod "pod-projected-configmaps-affeba32-bcf3-43e4-95d3-7ca89fca2049": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023276862s May 11 13:47:03.044: INFO: Pod "pod-projected-configmaps-affeba32-bcf3-43e4-95d3-7ca89fca2049": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030094188s May 11 13:47:05.048: INFO: Pod "pod-projected-configmaps-affeba32-bcf3-43e4-95d3-7ca89fca2049": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033760773s STEP: Saw pod success May 11 13:47:05.048: INFO: Pod "pod-projected-configmaps-affeba32-bcf3-43e4-95d3-7ca89fca2049" satisfied condition "success or failure" May 11 13:47:05.050: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-affeba32-bcf3-43e4-95d3-7ca89fca2049 container projected-configmap-volume-test: STEP: delete the pod May 11 13:47:05.317: INFO: Waiting for pod pod-projected-configmaps-affeba32-bcf3-43e4-95d3-7ca89fca2049 to disappear May 11 13:47:05.432: INFO: Pod pod-projected-configmaps-affeba32-bcf3-43e4-95d3-7ca89fca2049 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:47:05.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7592" for this suite. 
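Consuming one ConfigMap through two volumes in the same pod, as the test above does, looks like the following; a sketch (names illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: shared-cm
data:
  key: value
---
apiVersion: v1
kind: Pod
metadata:
  name: multi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/cm-one/key /etc/cm-two/key"]
    volumeMounts:
    - name: cm-one
      mountPath: /etc/cm-one
    - name: cm-two
      mountPath: /etc/cm-two
  volumes:
  - name: cm-one
    projected:
      sources:
      - configMap:
          name: shared-cm
  - name: cm-two
    projected:
      sources:
      - configMap:
          name: shared-cm   # same ConfigMap, second mount
EOF
# Both mounts serve identical content, so the container log prints the value twice.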
May 11 13:47:11.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:47:11.559: INFO: namespace projected-7592 deletion completed in 6.123965192s • [SLOW TEST:12.767 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:47:11.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9807.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9807.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9807.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9807.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 13:47:24.252: INFO: DNS probes using dns-test-9dae48ea-d026-4f77-b575-8ab8b621124a succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9807.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9807.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9807.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9807.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 13:47:35.649: INFO: File wheezy_udp@dns-test-service-3.dns-9807.svc.cluster.local from pod dns-9807/dns-test-c7f578d4-bf4d-43c1-8208-636a26b72ac7 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 13:47:35.652: INFO: File jessie_udp@dns-test-service-3.dns-9807.svc.cluster.local from pod dns-9807/dns-test-c7f578d4-bf4d-43c1-8208-636a26b72ac7 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 11 13:47:35.652: INFO: Lookups using dns-9807/dns-test-c7f578d4-bf4d-43c1-8208-636a26b72ac7 failed for: [wheezy_udp@dns-test-service-3.dns-9807.svc.cluster.local jessie_udp@dns-test-service-3.dns-9807.svc.cluster.local] May 11 13:47:40.656: INFO: File wheezy_udp@dns-test-service-3.dns-9807.svc.cluster.local from pod dns-9807/dns-test-c7f578d4-bf4d-43c1-8208-636a26b72ac7 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 13:47:40.659: INFO: File jessie_udp@dns-test-service-3.dns-9807.svc.cluster.local from pod dns-9807/dns-test-c7f578d4-bf4d-43c1-8208-636a26b72ac7 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 13:47:40.659: INFO: Lookups using dns-9807/dns-test-c7f578d4-bf4d-43c1-8208-636a26b72ac7 failed for: [wheezy_udp@dns-test-service-3.dns-9807.svc.cluster.local jessie_udp@dns-test-service-3.dns-9807.svc.cluster.local] May 11 13:47:45.657: INFO: File wheezy_udp@dns-test-service-3.dns-9807.svc.cluster.local from pod dns-9807/dns-test-c7f578d4-bf4d-43c1-8208-636a26b72ac7 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 13:47:45.660: INFO: File jessie_udp@dns-test-service-3.dns-9807.svc.cluster.local from pod dns-9807/dns-test-c7f578d4-bf4d-43c1-8208-636a26b72ac7 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 13:47:45.660: INFO: Lookups using dns-9807/dns-test-c7f578d4-bf4d-43c1-8208-636a26b72ac7 failed for: [wheezy_udp@dns-test-service-3.dns-9807.svc.cluster.local jessie_udp@dns-test-service-3.dns-9807.svc.cluster.local] May 11 13:47:50.655: INFO: File wheezy_udp@dns-test-service-3.dns-9807.svc.cluster.local from pod dns-9807/dns-test-c7f578d4-bf4d-43c1-8208-636a26b72ac7 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 13:47:50.657: INFO: File jessie_udp@dns-test-service-3.dns-9807.svc.cluster.local from pod dns-9807/dns-test-c7f578d4-bf4d-43c1-8208-636a26b72ac7 contains 'foo.example.com. ' instead of 'bar.example.com.' May 11 13:47:50.657: INFO: Lookups using dns-9807/dns-test-c7f578d4-bf4d-43c1-8208-636a26b72ac7 failed for: [wheezy_udp@dns-test-service-3.dns-9807.svc.cluster.local jessie_udp@dns-test-service-3.dns-9807.svc.cluster.local] May 11 13:47:55.658: INFO: DNS probes using dns-test-c7f578d4-bf4d-43c1-8208-636a26b72ac7 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9807.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9807.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9807.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9807.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 13:48:06.978: INFO: DNS probes using dns-test-02ef5991-031e-47e9-aae6-596e641dff6f succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:48:08.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9807" for this suite. 
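An ExternalName service like the one driving the probes above is served by cluster DNS as a CNAME; a sketch (names illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: externalname-demo
spec:
  type: ExternalName
  externalName: foo.example.com
EOF
# From a pod with dig installed:
#   dig +short externalname-demo.<namespace>.svc.cluster.local CNAME
# should return foo.example.com.; after externalName is changed, probes keep
# retrying (as in the 'foo.example.com. instead of bar.example.com.' lines
# above) until resolvers observe the new target.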
May 11 13:48:16.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:48:16.549: INFO: namespace dns-9807 deletion completed in 8.16304227s • [SLOW TEST:64.989 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:48:16.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 11 13:48:16.818: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:48:29.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8469" for this suite. 
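The RestartNever init-container flow above can be reproduced with two init containers that must each exit 0 before the app container runs; a sketch whose names mirror the test's init1/init2/run1 convention:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["/bin/true"]   # runs to completion before init2 starts
  - name: init2
    image: busybox
    command: ["/bin/true"]
  containers:
  - name: run1
    image: busybox
    command: ["/bin/true"]
EOF
# kubectl get pod init-demo -w walks through Init:0/2, Init:1/2,
# PodInitializing and finally Completed: init containers run in order, exactly
# once, before the regular container is invoked.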
May 11 13:48:38.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:48:38.186: INFO: namespace init-container-8469 deletion completed in 8.254564079s • [SLOW TEST:21.637 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:48:38.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6609.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6609.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 13:48:49.162: INFO: DNS probes using dns-6609/dns-test-5f95ff06-08d6-4e08-89d3-f2353fa64697 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:48:49.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6609" for this suite. 
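Stripped of the result-file plumbing, the cluster-DNS probes above are two dig queries, one over UDP and one over TCP, run from inside a pod; assuming a pod with dig available (name illustrative):

kubectl exec dns-prober -- dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A
kubectl exec dns-prober -- dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A
# Each should print an A record for the kubernetes service ClusterIP; the
# probe script writes OK only when the answer section is non-empty.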
May 11 13:48:57.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:48:57.522: INFO: namespace dns-6609 deletion completed in 8.210788769s • [SLOW TEST:19.335 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:48:57.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 11 13:48:57.929: INFO: Waiting up to 5m0s for pod "downward-api-a9c4a6d9-5d0a-4227-841a-fb650418b886" in namespace "downward-api-9801" to be "success or failure" May 11 13:48:58.015: INFO: Pod "downward-api-a9c4a6d9-5d0a-4227-841a-fb650418b886": Phase="Pending", Reason="", readiness=false. Elapsed: 86.397948ms May 11 13:49:00.651: INFO: Pod "downward-api-a9c4a6d9-5d0a-4227-841a-fb650418b886": Phase="Pending", Reason="", readiness=false. Elapsed: 2.721853513s May 11 13:49:02.655: INFO: Pod "downward-api-a9c4a6d9-5d0a-4227-841a-fb650418b886": Phase="Pending", Reason="", readiness=false. Elapsed: 4.726033374s May 11 13:49:04.659: INFO: Pod "downward-api-a9c4a6d9-5d0a-4227-841a-fb650418b886": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.73016788s STEP: Saw pod success May 11 13:49:04.659: INFO: Pod "downward-api-a9c4a6d9-5d0a-4227-841a-fb650418b886" satisfied condition "success or failure" May 11 13:49:04.662: INFO: Trying to get logs from node iruya-worker2 pod downward-api-a9c4a6d9-5d0a-4227-841a-fb650418b886 container dapi-container: STEP: delete the pod May 11 13:49:04.729: INFO: Waiting for pod downward-api-a9c4a6d9-5d0a-4227-841a-fb650418b886 to disappear May 11 13:49:05.046: INFO: Pod downward-api-a9c4a6d9-5d0a-4227-841a-fb650418b886 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:49:05.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9801" for this suite. 
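
The downward API test above injects the pod's own metadata into the container environment via fieldRef. A minimal sketch with hypothetical names:

# The container sees its own pod UID as $POD_UID.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-uid-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ['sh', '-c', 'echo "POD_UID=$POD_UID"']
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
EOF

# Once the pod has run, its log should contain POD_UID=<uid>:
kubectl logs downward-uid-demo
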
May 11 13:49:11.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:49:11.275: INFO: namespace downward-api-9801 deletion completed in 6.22512145s • [SLOW TEST:13.753 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:49:11.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 11 13:49:20.301: INFO: Successfully updated pod "pod-update-eb30389d-4191-45c2-b19a-2f48776c9890" STEP: verifying the updated pod is in kubernetes May 11 13:49:20.555: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:49:20.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8687" for this suite. 
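
The update step here mutates a live pod's labels through the API (most other pod spec fields are immutable after creation). The test does this via the client library; roughly the same operation with kubectl, using an illustrative pod name, is:

# Merge-patch a mutable field (labels) on a running pod.
kubectl patch pod pod-update-demo --type=merge \
  -p '{"metadata":{"labels":{"time":"updated"}}}'
kubectl get pod pod-update-demo --show-labels
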
May 11 13:49:42.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:49:42.802: INFO: namespace pods-8687 deletion completed in 22.244320362s • [SLOW TEST:31.527 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:49:42.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-01356907-906e-40a2-85e7-68b344677d2b STEP: Creating a pod to test consume secrets May 11 13:49:43.589: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dac2b1dc-56b6-4620-b119-5f0f27ead16c" in namespace "projected-3239" to be "success or failure" May 11 13:49:43.616: INFO: Pod "pod-projected-secrets-dac2b1dc-56b6-4620-b119-5f0f27ead16c": Phase="Pending", Reason="", readiness=false. Elapsed: 26.248896ms May 11 13:49:45.675: INFO: Pod "pod-projected-secrets-dac2b1dc-56b6-4620-b119-5f0f27ead16c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085867849s May 11 13:49:47.783: INFO: Pod "pod-projected-secrets-dac2b1dc-56b6-4620-b119-5f0f27ead16c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193987562s May 11 13:49:49.818: INFO: Pod "pod-projected-secrets-dac2b1dc-56b6-4620-b119-5f0f27ead16c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.229161022s May 11 13:49:51.824: INFO: Pod "pod-projected-secrets-dac2b1dc-56b6-4620-b119-5f0f27ead16c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.234383601s STEP: Saw pod success May 11 13:49:51.824: INFO: Pod "pod-projected-secrets-dac2b1dc-56b6-4620-b119-5f0f27ead16c" satisfied condition "success or failure" May 11 13:49:51.827: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-dac2b1dc-56b6-4620-b119-5f0f27ead16c container projected-secret-volume-test: STEP: delete the pod May 11 13:49:52.051: INFO: Waiting for pod pod-projected-secrets-dac2b1dc-56b6-4620-b119-5f0f27ead16c to disappear May 11 13:49:52.118: INFO: Pod pod-projected-secrets-dac2b1dc-56b6-4620-b119-5f0f27ead16c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:49:52.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3239" for this suite. 
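
A projected volume lets a single volume combine several sources (secrets, configmaps, downward API); this test consumes one secret through it and checks the mounted file's contents. A self-contained sketch, with illustrative names and mount path:

kubectl create secret generic projected-demo-secret --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ['sh', '-c', 'cat /projected-volume/data-1']
    volumeMounts:
    - name: secret-volume
      mountPath: /projected-volume
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: projected-demo-secret
EOF
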
May 11 13:50:00.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:50:00.355: INFO: namespace projected-3239 deletion completed in 8.232295853s • [SLOW TEST:17.552 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:50:00.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:50:00.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1782" for this suite. May 11 13:50:06.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:50:06.706: INFO: namespace services-1782 deletion completed in 6.083154894s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.350 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:50:06.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-334 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-334 to expose endpoints map[] May 11 13:50:06.915: INFO: Get endpoints failed (19.294053ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 11 
13:50:07.939: INFO: successfully validated that service endpoint-test2 in namespace services-334 exposes endpoints map[] (1.043267823s elapsed) STEP: Creating pod pod1 in namespace services-334 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-334 to expose endpoints map[pod1:[80]] May 11 13:50:12.361: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.41663106s elapsed, will retry) May 11 13:50:13.366: INFO: successfully validated that service endpoint-test2 in namespace services-334 exposes endpoints map[pod1:[80]] (5.421573216s elapsed) STEP: Creating pod pod2 in namespace services-334 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-334 to expose endpoints map[pod1:[80] pod2:[80]] May 11 13:50:17.846: INFO: Unexpected endpoints: found map[6ac6591f-36b9-4735-8882-489b530ed8e6:[80]], expected map[pod1:[80] pod2:[80]] (4.477663241s elapsed, will retry) May 11 13:50:18.880: INFO: successfully validated that service endpoint-test2 in namespace services-334 exposes endpoints map[pod1:[80] pod2:[80]] (5.511326628s elapsed) STEP: Deleting pod pod1 in namespace services-334 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-334 to expose endpoints map[pod2:[80]] May 11 13:50:20.242: INFO: successfully validated that service endpoint-test2 in namespace services-334 exposes endpoints map[pod2:[80]] (1.356492614s elapsed) STEP: Deleting pod pod2 in namespace services-334 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-334 to expose endpoints map[] May 11 13:50:21.415: INFO: successfully validated that service endpoint-test2 in namespace services-334 exposes endpoints map[] (1.167282302s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:50:22.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-334" for this suite. 
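
What the test is polling for is the Endpoints object that the endpoints controller maintains for the service: each ready pod matching the selector contributes an address, and deleting a pod removes its address again. The same check by hand, against the namespace from the log:

# List the pod IPs currently backing the service (empty when no pods match).
kubectl get endpoints endpoint-test2 --namespace=services-334 \
  -o jsonpath='{.subsets[*].addresses[*].ip}'
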
May 11 13:50:44.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:50:44.362: INFO: namespace services-334 deletion completed in 22.289875306s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:37.656 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:50:44.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 11 13:50:44.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5236' May 11 13:50:44.678: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 11 13:50:44.678: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created May 11 13:50:44.704: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 May 11 13:50:44.802: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 11 13:50:44.817: INFO: scanned /root for discovery docs: May 11 13:50:44.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5236' May 11 13:51:02.192: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 11 13:51:02.192: INFO: stdout: "Created e2e-test-nginx-rc-495d7cb3e4ae22cc2ce4d831ff7800de\nScaling up e2e-test-nginx-rc-495d7cb3e4ae22cc2ce4d831ff7800de from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-495d7cb3e4ae22cc2ce4d831ff7800de up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-495d7cb3e4ae22cc2ce4d831ff7800de to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 11 13:51:02.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5236' May 11 13:51:02.338: INFO: stderr: "" May 11 13:51:02.338: INFO: stdout: "e2e-test-nginx-rc-495d7cb3e4ae22cc2ce4d831ff7800de-7xqpk e2e-test-nginx-rc-bc7q4 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 May 11 13:51:07.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5236' May 11 13:51:07.449: INFO: stderr: "" May 11 13:51:07.449: INFO: stdout: "e2e-test-nginx-rc-495d7cb3e4ae22cc2ce4d831ff7800de-7xqpk e2e-test-nginx-rc-bc7q4 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 May 11 13:51:12.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5236' May 11 13:51:12.646: INFO: stderr: "" May 11 13:51:12.646: INFO: stdout: "e2e-test-nginx-rc-495d7cb3e4ae22cc2ce4d831ff7800de-7xqpk e2e-test-nginx-rc-bc7q4 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 May 11 13:51:17.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5236' May 11 13:51:17.753: INFO: stderr: "" May 11 13:51:17.753: INFO: stdout: "e2e-test-nginx-rc-495d7cb3e4ae22cc2ce4d831ff7800de-7xqpk e2e-test-nginx-rc-bc7q4 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 May 11 13:51:22.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5236' May 11 13:51:22.845: INFO: stderr: "" May 11 13:51:22.845: INFO: stdout: "e2e-test-nginx-rc-495d7cb3e4ae22cc2ce4d831ff7800de-7xqpk e2e-test-nginx-rc-bc7q4 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 May 11 13:51:27.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5236' May 11 13:51:27.939: INFO: stderr: "" May 11 13:51:27.939: INFO: stdout: "e2e-test-nginx-rc-495d7cb3e4ae22cc2ce4d831ff7800de-7xqpk e2e-test-nginx-rc-bc7q4 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 May 11 13:51:32.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template
--template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5236' May 11 13:51:33.026: INFO: stderr: "" May 11 13:51:33.026: INFO: stdout: "e2e-test-nginx-rc-495d7cb3e4ae22cc2ce4d831ff7800de-7xqpk e2e-test-nginx-rc-bc7q4 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 May 11 13:51:38.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5236' May 11 13:51:38.133: INFO: stderr: "" May 11 13:51:38.133: INFO: stdout: "e2e-test-nginx-rc-495d7cb3e4ae22cc2ce4d831ff7800de-7xqpk e2e-test-nginx-rc-bc7q4 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 May 11 13:51:43.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5236' May 11 13:51:43.317: INFO: stderr: "" May 11 13:51:43.317: INFO: stdout: "e2e-test-nginx-rc-495d7cb3e4ae22cc2ce4d831ff7800de-7xqpk e2e-test-nginx-rc-bc7q4 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 May 11 13:51:48.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5236' May 11 13:51:48.397: INFO: stderr: "" May 11 13:51:48.397: INFO: stdout: "e2e-test-nginx-rc-495d7cb3e4ae22cc2ce4d831ff7800de-7xqpk e2e-test-nginx-rc-bc7q4 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 May 11 13:51:53.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5236' May 11 13:51:53.486: INFO: stderr: "" May 11 13:51:53.487: INFO: stdout: "e2e-test-nginx-rc-495d7cb3e4ae22cc2ce4d831ff7800de-7xqpk e2e-test-nginx-rc-bc7q4 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 May 11 13:51:58.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5236' May 11 13:51:58.580: INFO: stderr: "" May 11 13:51:58.581: INFO: stdout: "e2e-test-nginx-rc-495d7cb3e4ae22cc2ce4d831ff7800de-7xqpk e2e-test-nginx-rc-bc7q4 " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 May 11 13:52:03.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5236' May 11 13:52:03.697: INFO: stderr: "" May 11 13:52:03.697: INFO: stdout: "e2e-test-nginx-rc-495d7cb3e4ae22cc2ce4d831ff7800de-7xqpk " May 11 13:52:03.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-495d7cb3e4ae22cc2ce4d831ff7800de-7xqpk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5236' May 11 13:52:03.851: INFO: stderr: "" May 11 13:52:03.851: INFO: stdout: "true" May 11 13:52:03.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-495d7cb3e4ae22cc2ce4d831ff7800de-7xqpk -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5236' May 11 13:52:03.940: INFO: stderr: "" May 11 13:52:03.940: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 11 13:52:03.940: INFO: e2e-test-nginx-rc-495d7cb3e4ae22cc2ce4d831ff7800de-7xqpk is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 May 11 13:52:03.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5236' May 11 13:52:04.248: INFO: stderr: "" May 11 13:52:04.248: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:52:04.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5236" for this suite. May 11 13:52:12.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:52:12.682: INFO: namespace kubectl-5236 deletion completed in 8.373429397s • [SLOW TEST:88.320 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:52:12.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-1237 I0511 13:52:13.067259 7 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1237, replica count: 1 I0511 13:52:14.117702 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 13:52:15.117864 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 13:52:16.118082 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 13:52:17.118389 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 13:52:18.119100 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 
inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 13:52:19.119352 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 13:52:20.119602 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 13:52:21.119815 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 13:52:22.120009 7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 13:52:22.359: INFO: Created: latency-svc-c6wjv May 11 13:52:22.510: INFO: Got endpoints: latency-svc-c6wjv [290.362354ms] May 11 13:52:22.612: INFO: Created: latency-svc-z49fn May 11 13:52:22.689: INFO: Got endpoints: latency-svc-z49fn [178.393107ms] May 11 13:52:22.693: INFO: Created: latency-svc-4w6nh May 11 13:52:22.723: INFO: Got endpoints: latency-svc-4w6nh [212.404998ms] May 11 13:52:22.784: INFO: Created: latency-svc-dcwbl May 11 13:52:22.863: INFO: Got endpoints: latency-svc-dcwbl [352.030979ms] May 11 13:52:22.870: INFO: Created: latency-svc-fngqv May 11 13:52:22.927: INFO: Got endpoints: latency-svc-fngqv [416.14063ms] May 11 13:52:23.006: INFO: Created: latency-svc-h9lf5 May 11 13:52:23.023: INFO: Got endpoints: latency-svc-h9lf5 [512.066289ms] May 11 13:52:23.079: INFO: Created: latency-svc-d8ngg May 11 13:52:23.105: INFO: Got endpoints: latency-svc-d8ngg [594.177541ms] May 11 13:52:23.205: INFO: Created: latency-svc-mr6rc May 11 13:52:23.208: INFO: Got endpoints: latency-svc-mr6rc [697.186934ms] May 11 13:52:23.378: INFO: Created: latency-svc-d4rlr May 11 13:52:23.390: INFO: Got endpoints: latency-svc-d4rlr [878.875696ms] May 11 13:52:23.466: INFO: Created: latency-svc-xnbsn May 11 13:52:23.540: INFO: Got endpoints: latency-svc-xnbsn [1.029158944s] May 11 13:52:23.574: INFO: Created: latency-svc-lg2rv May 11 13:52:23.601: INFO: Got endpoints: latency-svc-lg2rv [1.089846731s] May 11 13:52:23.696: INFO: Created: latency-svc-jlqdj May 11 13:52:23.703: INFO: Got endpoints: latency-svc-jlqdj [1.192488717s] May 11 13:52:23.762: INFO: Created: latency-svc-tfxjn May 11 13:52:23.788: INFO: Got endpoints: latency-svc-tfxjn [1.276508339s] May 11 13:52:23.887: INFO: Created: latency-svc-6bqrj May 11 13:52:23.894: INFO: Got endpoints: latency-svc-6bqrj [1.38338042s] May 11 13:52:24.080: INFO: Created: latency-svc-8j84q May 11 13:52:24.118: INFO: Got endpoints: latency-svc-8j84q [1.606980748s] May 11 13:52:24.154: INFO: Created: latency-svc-vvxq8 May 11 13:52:24.258: INFO: Got endpoints: latency-svc-vvxq8 [1.74668316s] May 11 13:52:24.260: INFO: Created: latency-svc-mhnqm May 11 13:52:24.296: INFO: Got endpoints: latency-svc-mhnqm [1.606895069s] May 11 13:52:24.441: INFO: Created: latency-svc-9l47r May 11 13:52:24.479: INFO: Got endpoints: latency-svc-9l47r [1.756331547s] May 11 13:52:24.521: INFO: Created: latency-svc-zj9xm May 11 13:52:24.635: INFO: Got endpoints: latency-svc-zj9xm [1.772460807s] May 11 13:52:24.654: INFO: Created: latency-svc-5pk56 May 11 13:52:24.684: INFO: Got endpoints: latency-svc-5pk56 [1.756776334s] May 11 13:52:24.727: INFO: Created: latency-svc-vprcb May 11 13:52:24.786: INFO: Got endpoints: latency-svc-vprcb [1.762569026s] May 11 13:52:24.844: INFO: Created: latency-svc-r9gj8 May 11 13:52:24.858: INFO: Got endpoints: 
latency-svc-r9gj8 [1.753318255s] May 11 13:52:24.977: INFO: Created: latency-svc-dwwcb May 11 13:52:25.038: INFO: Got endpoints: latency-svc-dwwcb [1.830054774s] May 11 13:52:25.217: INFO: Created: latency-svc-p5f7g May 11 13:52:25.220: INFO: Got endpoints: latency-svc-p5f7g [1.829867399s] May 11 13:52:25.414: INFO: Created: latency-svc-2wghz May 11 13:52:25.441: INFO: Got endpoints: latency-svc-2wghz [1.900468309s] May 11 13:52:25.477: INFO: Created: latency-svc-7xk59 May 11 13:52:25.581: INFO: Got endpoints: latency-svc-7xk59 [1.980585053s] May 11 13:52:25.585: INFO: Created: latency-svc-xdxjg May 11 13:52:25.622: INFO: Got endpoints: latency-svc-xdxjg [1.918084536s] May 11 13:52:25.666: INFO: Created: latency-svc-5kt4z May 11 13:52:25.780: INFO: Got endpoints: latency-svc-5kt4z [1.992284862s] May 11 13:52:25.782: INFO: Created: latency-svc-pw4x7 May 11 13:52:25.839: INFO: Got endpoints: latency-svc-pw4x7 [1.944533163s] May 11 13:52:25.983: INFO: Created: latency-svc-bz5mp May 11 13:52:25.987: INFO: Got endpoints: latency-svc-bz5mp [1.868866873s] May 11 13:52:26.201: INFO: Created: latency-svc-492rh May 11 13:52:26.204: INFO: Got endpoints: latency-svc-492rh [1.946102276s] May 11 13:52:26.247: INFO: Created: latency-svc-w5n47 May 11 13:52:26.271: INFO: Got endpoints: latency-svc-w5n47 [1.975511589s] May 11 13:52:26.360: INFO: Created: latency-svc-pnwwz May 11 13:52:26.363: INFO: Got endpoints: latency-svc-pnwwz [1.883567791s] May 11 13:52:26.431: INFO: Created: latency-svc-gt6k7 May 11 13:52:26.522: INFO: Got endpoints: latency-svc-gt6k7 [1.886460721s] May 11 13:52:26.537: INFO: Created: latency-svc-hvh66 May 11 13:52:26.548: INFO: Got endpoints: latency-svc-hvh66 [1.863908069s] May 11 13:52:26.605: INFO: Created: latency-svc-qrjnp May 11 13:52:26.660: INFO: Got endpoints: latency-svc-qrjnp [1.87390087s] May 11 13:52:26.683: INFO: Created: latency-svc-724sl May 11 13:52:26.705: INFO: Got endpoints: latency-svc-724sl [1.846225899s] May 11 13:52:26.816: INFO: Created: latency-svc-mcr9f May 11 13:52:26.820: INFO: Got endpoints: latency-svc-mcr9f [1.781423023s] May 11 13:52:26.901: INFO: Created: latency-svc-mqvg5 May 11 13:52:27.000: INFO: Got endpoints: latency-svc-mqvg5 [1.78067851s] May 11 13:52:27.016: INFO: Created: latency-svc-bnwkq May 11 13:52:27.049: INFO: Got endpoints: latency-svc-bnwkq [1.608127444s] May 11 13:52:27.218: INFO: Created: latency-svc-v4rrk May 11 13:52:27.223: INFO: Got endpoints: latency-svc-v4rrk [1.641914152s] May 11 13:52:27.393: INFO: Created: latency-svc-649xj May 11 13:52:27.395: INFO: Got endpoints: latency-svc-649xj [1.773339452s] May 11 13:52:27.435: INFO: Created: latency-svc-ngx6q May 11 13:52:27.600: INFO: Got endpoints: latency-svc-ngx6q [1.819792371s] May 11 13:52:27.622: INFO: Created: latency-svc-pwd8g May 11 13:52:27.672: INFO: Got endpoints: latency-svc-pwd8g [1.833404577s] May 11 13:52:27.786: INFO: Created: latency-svc-4zlzv May 11 13:52:27.817: INFO: Got endpoints: latency-svc-4zlzv [1.830020788s] May 11 13:52:27.862: INFO: Created: latency-svc-wkf87 May 11 13:52:28.025: INFO: Got endpoints: latency-svc-wkf87 [1.821059211s] May 11 13:52:28.028: INFO: Created: latency-svc-jf5pl May 11 13:52:28.057: INFO: Got endpoints: latency-svc-jf5pl [1.785983847s] May 11 13:52:28.082: INFO: Created: latency-svc-qhnc7 May 11 13:52:28.106: INFO: Got endpoints: latency-svc-qhnc7 [1.743389736s] May 11 13:52:28.168: INFO: Created: latency-svc-xpr7m May 11 13:52:28.184: INFO: Got endpoints: latency-svc-xpr7m [1.662257451s] May 11 13:52:28.218: INFO: Created: 
latency-svc-r5mf5 May 11 13:52:28.245: INFO: Got endpoints: latency-svc-r5mf5 [1.69671399s] May 11 13:52:28.361: INFO: Created: latency-svc-8jzs7 May 11 13:52:28.364: INFO: Got endpoints: latency-svc-8jzs7 [1.704149521s] May 11 13:52:28.460: INFO: Created: latency-svc-k89q4 May 11 13:52:28.611: INFO: Got endpoints: latency-svc-k89q4 [1.906520979s] May 11 13:52:28.643: INFO: Created: latency-svc-vk2gw May 11 13:52:28.712: INFO: Got endpoints: latency-svc-vk2gw [1.892579467s] May 11 13:52:28.810: INFO: Created: latency-svc-wszvv May 11 13:52:28.864: INFO: Got endpoints: latency-svc-wszvv [1.863429971s] May 11 13:52:28.983: INFO: Created: latency-svc-xn7cf May 11 13:52:28.986: INFO: Got endpoints: latency-svc-xn7cf [273.177152ms] May 11 13:52:29.072: INFO: Created: latency-svc-2zgkp May 11 13:52:29.156: INFO: Got endpoints: latency-svc-2zgkp [2.107106486s] May 11 13:52:29.238: INFO: Created: latency-svc-2qbqd May 11 13:52:29.330: INFO: Got endpoints: latency-svc-2qbqd [2.10649614s] May 11 13:52:29.332: INFO: Created: latency-svc-j4g6z May 11 13:52:29.344: INFO: Got endpoints: latency-svc-j4g6z [1.949177143s] May 11 13:52:29.388: INFO: Created: latency-svc-h8mj9 May 11 13:52:29.406: INFO: Got endpoints: latency-svc-h8mj9 [1.805788957s] May 11 13:52:29.510: INFO: Created: latency-svc-h5ktr May 11 13:52:29.519: INFO: Got endpoints: latency-svc-h5ktr [1.846814998s] May 11 13:52:29.605: INFO: Created: latency-svc-b2jbq May 11 13:52:29.665: INFO: Got endpoints: latency-svc-b2jbq [1.84784244s] May 11 13:52:29.703: INFO: Created: latency-svc-tchbz May 11 13:52:29.721: INFO: Got endpoints: latency-svc-tchbz [1.696334613s] May 11 13:52:29.758: INFO: Created: latency-svc-s9fzn May 11 13:52:29.815: INFO: Got endpoints: latency-svc-s9fzn [1.757023381s] May 11 13:52:29.847: INFO: Created: latency-svc-l4r44 May 11 13:52:29.878: INFO: Got endpoints: latency-svc-l4r44 [1.771639195s] May 11 13:52:30.007: INFO: Created: latency-svc-w95pt May 11 13:52:30.009: INFO: Got endpoints: latency-svc-w95pt [1.824695577s] May 11 13:52:30.042: INFO: Created: latency-svc-gs8vg May 11 13:52:30.065: INFO: Got endpoints: latency-svc-gs8vg [1.819888968s] May 11 13:52:30.157: INFO: Created: latency-svc-rrjlf May 11 13:52:30.159: INFO: Got endpoints: latency-svc-rrjlf [1.794990409s] May 11 13:52:30.238: INFO: Created: latency-svc-wql7x May 11 13:52:30.288: INFO: Got endpoints: latency-svc-wql7x [1.676217155s] May 11 13:52:30.311: INFO: Created: latency-svc-sbml2 May 11 13:52:30.341: INFO: Got endpoints: latency-svc-sbml2 [1.477456885s] May 11 13:52:30.371: INFO: Created: latency-svc-wlbxg May 11 13:52:30.461: INFO: Got endpoints: latency-svc-wlbxg [1.475727064s] May 11 13:52:30.464: INFO: Created: latency-svc-474tf May 11 13:52:30.504: INFO: Got endpoints: latency-svc-474tf [1.348128337s] May 11 13:52:30.641: INFO: Created: latency-svc-srjmw May 11 13:52:30.667: INFO: Got endpoints: latency-svc-srjmw [1.336531623s] May 11 13:52:30.722: INFO: Created: latency-svc-x5f2d May 11 13:52:30.839: INFO: Got endpoints: latency-svc-x5f2d [1.494792483s] May 11 13:52:30.879: INFO: Created: latency-svc-chqgt May 11 13:52:30.919: INFO: Got endpoints: latency-svc-chqgt [1.513206612s] May 11 13:52:31.054: INFO: Created: latency-svc-vx66q May 11 13:52:31.087: INFO: Got endpoints: latency-svc-vx66q [1.567252019s] May 11 13:52:31.150: INFO: Created: latency-svc-m47xd May 11 13:52:31.312: INFO: Got endpoints: latency-svc-m47xd [1.646634533s] May 11 13:52:31.314: INFO: Created: latency-svc-n8wbq May 11 13:52:31.376: INFO: Got endpoints: 
latency-svc-n8wbq [1.654555925s] May 11 13:52:31.576: INFO: Created: latency-svc-mwz86 May 11 13:52:31.580: INFO: Got endpoints: latency-svc-mwz86 [1.765671073s] May 11 13:52:31.827: INFO: Created: latency-svc-kscx6 May 11 13:52:32.115: INFO: Got endpoints: latency-svc-kscx6 [2.236999719s] May 11 13:52:32.118: INFO: Created: latency-svc-67r9n May 11 13:52:32.157: INFO: Got endpoints: latency-svc-67r9n [2.148494013s] May 11 13:52:32.348: INFO: Created: latency-svc-8chp8 May 11 13:52:32.359: INFO: Got endpoints: latency-svc-8chp8 [2.29439962s] May 11 13:52:32.423: INFO: Created: latency-svc-4lss8 May 11 13:52:32.602: INFO: Got endpoints: latency-svc-4lss8 [2.442742426s] May 11 13:52:32.607: INFO: Created: latency-svc-9n98r May 11 13:52:32.678: INFO: Got endpoints: latency-svc-9n98r [2.390564165s] May 11 13:52:32.882: INFO: Created: latency-svc-gffgj May 11 13:52:32.918: INFO: Got endpoints: latency-svc-gffgj [2.576486222s] May 11 13:52:33.079: INFO: Created: latency-svc-m6lwk May 11 13:52:33.086: INFO: Got endpoints: latency-svc-m6lwk [2.624707449s] May 11 13:52:33.120: INFO: Created: latency-svc-x7289 May 11 13:52:33.159: INFO: Got endpoints: latency-svc-x7289 [2.65469829s] May 11 13:52:33.264: INFO: Created: latency-svc-2mnhw May 11 13:52:33.267: INFO: Got endpoints: latency-svc-2mnhw [2.599837309s] May 11 13:52:33.339: INFO: Created: latency-svc-78zss May 11 13:52:33.438: INFO: Got endpoints: latency-svc-78zss [2.598766105s] May 11 13:52:33.489: INFO: Created: latency-svc-8mtsx May 11 13:52:33.514: INFO: Got endpoints: latency-svc-8mtsx [2.594966303s] May 11 13:52:33.582: INFO: Created: latency-svc-brpsd May 11 13:52:33.591: INFO: Got endpoints: latency-svc-brpsd [2.504812223s] May 11 13:52:33.640: INFO: Created: latency-svc-ttw9n May 11 13:52:33.737: INFO: Got endpoints: latency-svc-ttw9n [2.425458369s] May 11 13:52:33.753: INFO: Created: latency-svc-zdh68 May 11 13:52:33.778: INFO: Got endpoints: latency-svc-zdh68 [2.402050443s] May 11 13:52:33.918: INFO: Created: latency-svc-s872j May 11 13:52:33.948: INFO: Got endpoints: latency-svc-s872j [2.367930793s] May 11 13:52:34.077: INFO: Created: latency-svc-qmhvk May 11 13:52:34.127: INFO: Got endpoints: latency-svc-qmhvk [2.012212536s] May 11 13:52:34.271: INFO: Created: latency-svc-8h2fp May 11 13:52:34.273: INFO: Got endpoints: latency-svc-8h2fp [2.116064275s] May 11 13:52:34.352: INFO: Created: latency-svc-fdxsv May 11 13:52:34.450: INFO: Got endpoints: latency-svc-fdxsv [2.091061004s] May 11 13:52:34.451: INFO: Created: latency-svc-8bbjf May 11 13:52:34.481: INFO: Got endpoints: latency-svc-8bbjf [1.879439423s] May 11 13:52:34.539: INFO: Created: latency-svc-b5bk6 May 11 13:52:34.629: INFO: Got endpoints: latency-svc-b5bk6 [1.95108329s] May 11 13:52:34.660: INFO: Created: latency-svc-rf77j May 11 13:52:34.674: INFO: Got endpoints: latency-svc-rf77j [1.755814833s] May 11 13:52:34.711: INFO: Created: latency-svc-jbtg9 May 11 13:52:34.797: INFO: Got endpoints: latency-svc-jbtg9 [1.710724717s] May 11 13:52:34.810: INFO: Created: latency-svc-fcjcp May 11 13:52:34.848: INFO: Got endpoints: latency-svc-fcjcp [1.689089292s] May 11 13:52:34.996: INFO: Created: latency-svc-gq2ml May 11 13:52:35.023: INFO: Got endpoints: latency-svc-gq2ml [1.756089712s] May 11 13:52:35.248: INFO: Created: latency-svc-kj45l May 11 13:52:35.252: INFO: Got endpoints: latency-svc-kj45l [1.813844704s] May 11 13:52:35.328: INFO: Created: latency-svc-b94dl May 11 13:52:35.438: INFO: Got endpoints: latency-svc-b94dl [1.923771144s] May 11 13:52:35.512: INFO: Created: 
latency-svc-hgklv May 11 13:52:35.606: INFO: Got endpoints: latency-svc-hgklv [2.014258284s] May 11 13:52:35.623: INFO: Created: latency-svc-lhsbd May 11 13:52:35.660: INFO: Got endpoints: latency-svc-lhsbd [1.922602198s] May 11 13:52:35.815: INFO: Created: latency-svc-kg2q4 May 11 13:52:35.864: INFO: Got endpoints: latency-svc-kg2q4 [2.085672622s] May 11 13:52:36.006: INFO: Created: latency-svc-vq5vb May 11 13:52:36.065: INFO: Got endpoints: latency-svc-vq5vb [2.117076289s] May 11 13:52:36.139: INFO: Created: latency-svc-nlzxr May 11 13:52:36.150: INFO: Got endpoints: latency-svc-nlzxr [2.0222548s] May 11 13:52:36.193: INFO: Created: latency-svc-mxg57 May 11 13:52:36.210: INFO: Got endpoints: latency-svc-mxg57 [1.936602196s] May 11 13:52:36.277: INFO: Created: latency-svc-jg49s May 11 13:52:36.278: INFO: Got endpoints: latency-svc-jg49s [1.827140602s] May 11 13:52:36.426: INFO: Created: latency-svc-b2q5s May 11 13:52:36.429: INFO: Got endpoints: latency-svc-b2q5s [1.947822211s] May 11 13:52:36.506: INFO: Created: latency-svc-86r62 May 11 13:52:36.511: INFO: Got endpoints: latency-svc-86r62 [1.881422209s] May 11 13:52:36.600: INFO: Created: latency-svc-xn9f7 May 11 13:52:36.641: INFO: Got endpoints: latency-svc-xn9f7 [1.966799801s] May 11 13:52:36.641: INFO: Created: latency-svc-ccc4f May 11 13:52:36.668: INFO: Got endpoints: latency-svc-ccc4f [1.87089493s] May 11 13:52:36.798: INFO: Created: latency-svc-vcw8g May 11 13:52:36.801: INFO: Got endpoints: latency-svc-vcw8g [1.953272993s] May 11 13:52:36.868: INFO: Created: latency-svc-qr9cs May 11 13:52:36.894: INFO: Got endpoints: latency-svc-qr9cs [1.871241892s] May 11 13:52:36.983: INFO: Created: latency-svc-t7m45 May 11 13:52:36.986: INFO: Got endpoints: latency-svc-t7m45 [1.734161569s] May 11 13:52:37.076: INFO: Created: latency-svc-ch5xd May 11 13:52:37.174: INFO: Got endpoints: latency-svc-ch5xd [1.736424509s] May 11 13:52:37.201: INFO: Created: latency-svc-zpn6b May 11 13:52:37.231: INFO: Got endpoints: latency-svc-zpn6b [1.62484657s] May 11 13:52:37.331: INFO: Created: latency-svc-sb5f4 May 11 13:52:37.334: INFO: Got endpoints: latency-svc-sb5f4 [1.673800938s] May 11 13:52:37.406: INFO: Created: latency-svc-crjvh May 11 13:52:37.497: INFO: Got endpoints: latency-svc-crjvh [1.633507765s] May 11 13:52:37.530: INFO: Created: latency-svc-x4qhv May 11 13:52:37.557: INFO: Got endpoints: latency-svc-x4qhv [1.491623483s] May 11 13:52:37.596: INFO: Created: latency-svc-8tnnb May 11 13:52:37.689: INFO: Got endpoints: latency-svc-8tnnb [1.539627495s] May 11 13:52:37.693: INFO: Created: latency-svc-djrt5 May 11 13:52:37.703: INFO: Got endpoints: latency-svc-djrt5 [1.493060915s] May 11 13:52:37.750: INFO: Created: latency-svc-pvwlb May 11 13:52:37.775: INFO: Got endpoints: latency-svc-pvwlb [1.497925791s] May 11 13:52:37.868: INFO: Created: latency-svc-2sx2b May 11 13:52:37.884: INFO: Got endpoints: latency-svc-2sx2b [1.454634916s] May 11 13:52:37.910: INFO: Created: latency-svc-zg8xp May 11 13:52:37.934: INFO: Got endpoints: latency-svc-zg8xp [1.423137079s] May 11 13:52:38.031: INFO: Created: latency-svc-42285 May 11 13:52:38.058: INFO: Got endpoints: latency-svc-42285 [1.417623081s] May 11 13:52:38.094: INFO: Created: latency-svc-vsln7 May 11 13:52:38.123: INFO: Got endpoints: latency-svc-vsln7 [1.454656819s] May 11 13:52:38.186: INFO: Created: latency-svc-85r7d May 11 13:52:38.191: INFO: Got endpoints: latency-svc-85r7d [1.389366085s] May 11 13:52:38.236: INFO: Created: latency-svc-x6d9p May 11 13:52:38.252: INFO: Got endpoints: 
latency-svc-x6d9p [1.35759708s] May 11 13:52:38.280: INFO: Created: latency-svc-9q9mt May 11 13:52:38.354: INFO: Got endpoints: latency-svc-9q9mt [1.368066602s] May 11 13:52:38.356: INFO: Created: latency-svc-v7t6j May 11 13:52:38.397: INFO: Got endpoints: latency-svc-v7t6j [1.222674069s] May 11 13:52:38.570: INFO: Created: latency-svc-p6s4k May 11 13:52:38.623: INFO: Got endpoints: latency-svc-p6s4k [1.392305454s] May 11 13:52:38.767: INFO: Created: latency-svc-dh5jb May 11 13:52:38.769: INFO: Got endpoints: latency-svc-dh5jb [1.435521459s] May 11 13:52:38.911: INFO: Created: latency-svc-7zk7t May 11 13:52:38.931: INFO: Got endpoints: latency-svc-7zk7t [1.433661259s] May 11 13:52:39.080: INFO: Created: latency-svc-rtlg5 May 11 13:52:39.153: INFO: Got endpoints: latency-svc-rtlg5 [1.596313522s] May 11 13:52:39.156: INFO: Created: latency-svc-9fbrk May 11 13:52:39.265: INFO: Got endpoints: latency-svc-9fbrk [1.57576771s] May 11 13:52:39.276: INFO: Created: latency-svc-wczh6 May 11 13:52:39.328: INFO: Got endpoints: latency-svc-wczh6 [1.625055305s] May 11 13:52:39.494: INFO: Created: latency-svc-nrdrx May 11 13:52:39.977: INFO: Got endpoints: latency-svc-nrdrx [2.20188983s] May 11 13:52:40.015: INFO: Created: latency-svc-l2dcr May 11 13:52:40.054: INFO: Got endpoints: latency-svc-l2dcr [2.170421352s] May 11 13:52:40.225: INFO: Created: latency-svc-mrtjt May 11 13:52:40.258: INFO: Got endpoints: latency-svc-mrtjt [2.323828119s] May 11 13:52:40.295: INFO: Created: latency-svc-6dcvf May 11 13:52:40.306: INFO: Got endpoints: latency-svc-6dcvf [2.247671014s] May 11 13:52:40.426: INFO: Created: latency-svc-n6btx May 11 13:52:40.429: INFO: Got endpoints: latency-svc-n6btx [2.306582023s] May 11 13:52:40.476: INFO: Created: latency-svc-b745x May 11 13:52:40.505: INFO: Got endpoints: latency-svc-b745x [2.313989948s] May 11 13:52:40.576: INFO: Created: latency-svc-5tzk6 May 11 13:52:40.582: INFO: Got endpoints: latency-svc-5tzk6 [2.330474119s] May 11 13:52:40.651: INFO: Created: latency-svc-rcmq5 May 11 13:52:40.750: INFO: Got endpoints: latency-svc-rcmq5 [2.395424907s] May 11 13:52:40.753: INFO: Created: latency-svc-krvhv May 11 13:52:40.774: INFO: Got endpoints: latency-svc-krvhv [2.376880303s] May 11 13:52:40.827: INFO: Created: latency-svc-vwv2t May 11 13:52:40.887: INFO: Got endpoints: latency-svc-vwv2t [2.26381785s] May 11 13:52:40.911: INFO: Created: latency-svc-9nbqc May 11 13:52:40.933: INFO: Got endpoints: latency-svc-9nbqc [2.163667964s] May 11 13:52:40.975: INFO: Created: latency-svc-sdxts May 11 13:52:41.029: INFO: Got endpoints: latency-svc-sdxts [2.098077301s] May 11 13:52:41.069: INFO: Created: latency-svc-tdbf8 May 11 13:52:41.096: INFO: Got endpoints: latency-svc-tdbf8 [1.942608751s] May 11 13:52:41.174: INFO: Created: latency-svc-p2hkr May 11 13:52:41.186: INFO: Got endpoints: latency-svc-p2hkr [1.920735012s] May 11 13:52:41.219: INFO: Created: latency-svc-6kpqj May 11 13:52:41.249: INFO: Got endpoints: latency-svc-6kpqj [1.920751606s] May 11 13:52:41.322: INFO: Created: latency-svc-fnsth May 11 13:52:41.334: INFO: Got endpoints: latency-svc-fnsth [1.356084046s] May 11 13:52:41.373: INFO: Created: latency-svc-mrpcm May 11 13:52:41.462: INFO: Got endpoints: latency-svc-mrpcm [1.40746508s] May 11 13:52:41.475: INFO: Created: latency-svc-lm2xs May 11 13:52:41.554: INFO: Got endpoints: latency-svc-lm2xs [1.295814845s] May 11 13:52:41.630: INFO: Created: latency-svc-s7vg2 May 11 13:52:41.646: INFO: Got endpoints: latency-svc-s7vg2 [1.340121632s] May 11 13:52:41.707: INFO: Created: 
latency-svc-67bqd May 11 13:52:41.718: INFO: Got endpoints: latency-svc-67bqd [1.289062856s] May 11 13:52:41.791: INFO: Created: latency-svc-tx5ll May 11 13:52:41.824: INFO: Got endpoints: latency-svc-tx5ll [1.319012879s] May 11 13:52:41.989: INFO: Created: latency-svc-65x9k May 11 13:52:41.991: INFO: Got endpoints: latency-svc-65x9k [1.409027617s] May 11 13:52:42.069: INFO: Created: latency-svc-sc6pg May 11 13:52:42.150: INFO: Got endpoints: latency-svc-sc6pg [1.400709972s] May 11 13:52:42.166: INFO: Created: latency-svc-gr4hc May 11 13:52:42.234: INFO: Got endpoints: latency-svc-gr4hc [1.459970615s] May 11 13:52:42.324: INFO: Created: latency-svc-jdrlg May 11 13:52:42.356: INFO: Got endpoints: latency-svc-jdrlg [1.46871052s] May 11 13:52:42.406: INFO: Created: latency-svc-m4ltw May 11 13:52:42.498: INFO: Got endpoints: latency-svc-m4ltw [1.565106058s] May 11 13:52:42.514: INFO: Created: latency-svc-lp8q5 May 11 13:52:42.573: INFO: Got endpoints: latency-svc-lp8q5 [1.543656668s] May 11 13:52:42.684: INFO: Created: latency-svc-cgxz6 May 11 13:52:42.699: INFO: Got endpoints: latency-svc-cgxz6 [1.602784496s] May 11 13:52:42.888: INFO: Created: latency-svc-m57bs May 11 13:52:42.891: INFO: Got endpoints: latency-svc-m57bs [1.704953067s] May 11 13:52:43.133: INFO: Created: latency-svc-w2t2s May 11 13:52:43.167: INFO: Got endpoints: latency-svc-w2t2s [1.918021193s] May 11 13:52:43.204: INFO: Created: latency-svc-s7p2x May 11 13:52:43.342: INFO: Got endpoints: latency-svc-s7p2x [2.008597133s] May 11 13:52:43.344: INFO: Created: latency-svc-h2hbp May 11 13:52:43.378: INFO: Got endpoints: latency-svc-h2hbp [1.91638223s] May 11 13:52:43.620: INFO: Created: latency-svc-7lvmq May 11 13:52:43.626: INFO: Got endpoints: latency-svc-7lvmq [2.071693526s] May 11 13:52:43.684: INFO: Created: latency-svc-wclj6 May 11 13:52:43.767: INFO: Got endpoints: latency-svc-wclj6 [2.120578794s] May 11 13:52:43.795: INFO: Created: latency-svc-g4lcc May 11 13:52:43.846: INFO: Got endpoints: latency-svc-g4lcc [2.127942322s] May 11 13:52:43.929: INFO: Created: latency-svc-xl8rz May 11 13:52:43.937: INFO: Got endpoints: latency-svc-xl8rz [2.112490924s] May 11 13:52:43.982: INFO: Created: latency-svc-n2x4x May 11 13:52:43.997: INFO: Got endpoints: latency-svc-n2x4x [2.005928781s] May 11 13:52:44.023: INFO: Created: latency-svc-gvm5q May 11 13:52:44.078: INFO: Got endpoints: latency-svc-gvm5q [1.927839485s] May 11 13:52:44.108: INFO: Created: latency-svc-q79sv May 11 13:52:44.135: INFO: Got endpoints: latency-svc-q79sv [1.901321671s] May 11 13:52:44.234: INFO: Created: latency-svc-54tgw May 11 13:52:44.262: INFO: Got endpoints: latency-svc-54tgw [1.905908644s] May 11 13:52:44.336: INFO: Created: latency-svc-46h2f May 11 13:52:44.408: INFO: Got endpoints: latency-svc-46h2f [1.909396882s] May 11 13:52:44.415: INFO: Created: latency-svc-8j858 May 11 13:52:44.424: INFO: Got endpoints: latency-svc-8j858 [1.851079471s] May 11 13:52:44.588: INFO: Created: latency-svc-fqvws May 11 13:52:44.598: INFO: Got endpoints: latency-svc-fqvws [1.898902764s] May 11 13:52:44.641: INFO: Created: latency-svc-ksbdq May 11 13:52:44.653: INFO: Got endpoints: latency-svc-ksbdq [1.761951025s] May 11 13:52:44.809: INFO: Created: latency-svc-zctk2 May 11 13:52:44.874: INFO: Created: latency-svc-jvbn7 May 11 13:52:44.874: INFO: Got endpoints: latency-svc-zctk2 [1.706316991s] May 11 13:52:44.900: INFO: Got endpoints: latency-svc-jvbn7 [1.557381296s] May 11 13:52:44.975: INFO: Created: latency-svc-4j5xp May 11 13:52:44.996: INFO: Got endpoints: 
latency-svc-4j5xp [1.617273756s] May 11 13:52:45.133: INFO: Created: latency-svc-k822x May 11 13:52:45.135: INFO: Got endpoints: latency-svc-k822x [1.509342592s] May 11 13:52:45.312: INFO: Created: latency-svc-qlvpv May 11 13:52:45.315: INFO: Got endpoints: latency-svc-qlvpv [1.547886577s] May 11 13:52:45.394: INFO: Created: latency-svc-bv4n4 May 11 13:52:45.516: INFO: Got endpoints: latency-svc-bv4n4 [1.669253264s] May 11 13:52:45.534: INFO: Created: latency-svc-lkhgj May 11 13:52:45.572: INFO: Got endpoints: latency-svc-lkhgj [1.63553823s] May 11 13:52:45.607: INFO: Created: latency-svc-h4nx2 May 11 13:52:45.737: INFO: Got endpoints: latency-svc-h4nx2 [1.740063152s] May 11 13:52:45.743: INFO: Created: latency-svc-hbvk7 May 11 13:52:45.801: INFO: Got endpoints: latency-svc-hbvk7 [1.723064919s] May 11 13:52:45.911: INFO: Created: latency-svc-4k5tm May 11 13:52:45.927: INFO: Got endpoints: latency-svc-4k5tm [1.792014556s] May 11 13:52:45.990: INFO: Created: latency-svc-jrnm6 May 11 13:52:46.090: INFO: Got endpoints: latency-svc-jrnm6 [1.828548491s] May 11 13:52:46.092: INFO: Created: latency-svc-k5cfg May 11 13:52:46.102: INFO: Got endpoints: latency-svc-k5cfg [1.693940108s] May 11 13:52:46.153: INFO: Created: latency-svc-nnqgn May 11 13:52:46.288: INFO: Got endpoints: latency-svc-nnqgn [1.863775993s] May 11 13:52:46.290: INFO: Created: latency-svc-7lcjr May 11 13:52:46.348: INFO: Got endpoints: latency-svc-7lcjr [1.750138238s] May 11 13:52:46.492: INFO: Created: latency-svc-dl2zr May 11 13:52:46.494: INFO: Got endpoints: latency-svc-dl2zr [1.841607405s] May 11 13:52:46.589: INFO: Created: latency-svc-q5vqb May 11 13:52:46.743: INFO: Got endpoints: latency-svc-q5vqb [1.869615457s] May 11 13:52:46.761: INFO: Created: latency-svc-vwwcq May 11 13:52:46.803: INFO: Got endpoints: latency-svc-vwwcq [1.903164546s] May 11 13:52:46.803: INFO: Latencies: [178.393107ms 212.404998ms 273.177152ms 352.030979ms 416.14063ms 512.066289ms 594.177541ms 697.186934ms 878.875696ms 1.029158944s 1.089846731s 1.192488717s 1.222674069s 1.276508339s 1.289062856s 1.295814845s 1.319012879s 1.336531623s 1.340121632s 1.348128337s 1.356084046s 1.35759708s 1.368066602s 1.38338042s 1.389366085s 1.392305454s 1.400709972s 1.40746508s 1.409027617s 1.417623081s 1.423137079s 1.433661259s 1.435521459s 1.454634916s 1.454656819s 1.459970615s 1.46871052s 1.475727064s 1.477456885s 1.491623483s 1.493060915s 1.494792483s 1.497925791s 1.509342592s 1.513206612s 1.539627495s 1.543656668s 1.547886577s 1.557381296s 1.565106058s 1.567252019s 1.57576771s 1.596313522s 1.602784496s 1.606895069s 1.606980748s 1.608127444s 1.617273756s 1.62484657s 1.625055305s 1.633507765s 1.63553823s 1.641914152s 1.646634533s 1.654555925s 1.662257451s 1.669253264s 1.673800938s 1.676217155s 1.689089292s 1.693940108s 1.696334613s 1.69671399s 1.704149521s 1.704953067s 1.706316991s 1.710724717s 1.723064919s 1.734161569s 1.736424509s 1.740063152s 1.743389736s 1.74668316s 1.750138238s 1.753318255s 1.755814833s 1.756089712s 1.756331547s 1.756776334s 1.757023381s 1.761951025s 1.762569026s 1.765671073s 1.771639195s 1.772460807s 1.773339452s 1.78067851s 1.781423023s 1.785983847s 1.792014556s 1.794990409s 1.805788957s 1.813844704s 1.819792371s 1.819888968s 1.821059211s 1.824695577s 1.827140602s 1.828548491s 1.829867399s 1.830020788s 1.830054774s 1.833404577s 1.841607405s 1.846225899s 1.846814998s 1.84784244s 1.851079471s 1.863429971s 1.863775993s 1.863908069s 1.868866873s 1.869615457s 1.87089493s 1.871241892s 1.87390087s 1.879439423s 1.881422209s 1.883567791s 1.886460721s 
1.892579467s 1.898902764s 1.900468309s 1.901321671s 1.903164546s 1.905908644s 1.906520979s 1.909396882s 1.91638223s 1.918021193s 1.918084536s 1.920735012s 1.920751606s 1.922602198s 1.923771144s 1.927839485s 1.936602196s 1.942608751s 1.944533163s 1.946102276s 1.947822211s 1.949177143s 1.95108329s 1.953272993s 1.966799801s 1.975511589s 1.980585053s 1.992284862s 2.005928781s 2.008597133s 2.012212536s 2.014258284s 2.0222548s 2.071693526s 2.085672622s 2.091061004s 2.098077301s 2.10649614s 2.107106486s 2.112490924s 2.116064275s 2.117076289s 2.120578794s 2.127942322s 2.148494013s 2.163667964s 2.170421352s 2.20188983s 2.236999719s 2.247671014s 2.26381785s 2.29439962s 2.306582023s 2.313989948s 2.323828119s 2.330474119s 2.367930793s 2.376880303s 2.390564165s 2.395424907s 2.402050443s 2.425458369s 2.442742426s 2.504812223s 2.576486222s 2.594966303s 2.598766105s 2.599837309s 2.624707449s 2.65469829s] May 11 13:52:46.803: INFO: 50 %ile: 1.794990409s May 11 13:52:46.803: INFO: 90 %ile: 2.26381785s May 11 13:52:46.803: INFO: 99 %ile: 2.624707449s May 11 13:52:46.803: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:52:46.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-1237" for this suite. May 11 13:53:53.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:53:53.211: INFO: namespace svc-latency-1237 deletion completed in 1m6.233235079s • [SLOW TEST:100.528 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:53:53.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 11 13:53:53.415: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9347,SelfLink:/api/v1/namespaces/watch-9347/configmaps/e2e-watch-test-watch-closed,UID:352d8146-89da-4c0b-b307-cf0278d16ec4,ResourceVersion:10256617,Generation:0,CreationTimestamp:2020-05-11 13:53:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 11 13:53:53.415: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9347,SelfLink:/api/v1/namespaces/watch-9347/configmaps/e2e-watch-test-watch-closed,UID:352d8146-89da-4c0b-b307-cf0278d16ec4,ResourceVersion:10256618,Generation:0,CreationTimestamp:2020-05-11 13:53:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 11 13:53:53.426: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9347,SelfLink:/api/v1/namespaces/watch-9347/configmaps/e2e-watch-test-watch-closed,UID:352d8146-89da-4c0b-b307-cf0278d16ec4,ResourceVersion:10256619,Generation:0,CreationTimestamp:2020-05-11 13:53:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 11 13:53:53.426: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9347,SelfLink:/api/v1/namespaces/watch-9347/configmaps/e2e-watch-test-watch-closed,UID:352d8146-89da-4c0b-b307-cf0278d16ec4,ResourceVersion:10256620,Generation:0,CreationTimestamp:2020-05-11 13:53:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:53:53.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9347" for this suite. 
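Note for readers reproducing this watch behavior by hand: the property under test is that a watch opened from a previously observed resourceVersion replays every change made while the first watch was closed (here, the MODIFIED at 10256619 and the DELETED at 10256620). A rough raw-API equivalent, using the namespace and resourceVersion from this particular run (the suite itself does this through the Go client, not kubectl):

# Open a watch on configmaps starting from the last resourceVersion the first watch saw.
# The apiserver streams every event that happened after that version.
kubectl --kubeconfig=/root/.kube/config get --raw \
  "/api/v1/namespaces/watch-9347/configmaps?watch=true&resourceVersion=10256618"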
May 11 13:53:59.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:53:59.563: INFO: namespace watch-9347 deletion completed in 6.075991104s • [SLOW TEST:6.351 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:53:59.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 11 13:54:08.839: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:54:08.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2291" for this suite. 
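A minimal sketch of the kind of pod this test creates: the container writes its message to the termination-message file and exits 0, so the FallbackToLogsOnError policy never needs its log fallback and the message ("OK") is read from the file. All names and the image below are illustrative assumptions, not the suite's actual spec:

kubectl --kubeconfig=/root/.kube/config create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-from-file   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                      # assumed image
    command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF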
May 11 13:54:14.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:54:15.104: INFO: namespace container-runtime-2291 deletion completed in 6.189035055s • [SLOW TEST:15.540 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:54:15.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-625a6e98-13e1-4eb3-8986-c00d01cae872 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:54:15.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7986" for this suite. 
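What gets submitted here is a Secret whose data map contains an empty-string key; apiserver validation rejects it, and that rejection is the expected outcome (note there is no "Saw pod success" step above). A hand-run equivalent, with an illustrative name and value:

# This create is EXPECTED to fail validation because of the "" key.
kubectl --kubeconfig=/root/.kube/config create -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-demo   # illustrative name
data:
  "": dmFsdWUtMQ==             # empty key -> rejected by apiserver validation
EOF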
May 11 13:54:21.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:54:21.592: INFO: namespace secrets-7986 deletion completed in 6.127759067s • [SLOW TEST:6.488 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:54:21.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-86545b9f-1aba-4979-aadb-df9cb5082e1b in namespace container-probe-202 May 11 13:54:31.116: INFO: Started pod liveness-86545b9f-1aba-4979-aadb-df9cb5082e1b in namespace container-probe-202 STEP: checking the pod's current state and verifying that restartCount is present May 11 13:54:31.119: INFO: Initial restart count of pod liveness-86545b9f-1aba-4979-aadb-df9cb5082e1b is 0 May 11 13:54:51.182: INFO: Restart count of pod container-probe-202/liveness-86545b9f-1aba-4979-aadb-df9cb5082e1b is now 1 (20.062833147s elapsed) May 11 13:55:09.431: INFO: Restart count of pod container-probe-202/liveness-86545b9f-1aba-4979-aadb-df9cb5082e1b is now 2 (38.311833135s elapsed) May 11 13:55:29.606: INFO: Restart count of pod container-probe-202/liveness-86545b9f-1aba-4979-aadb-df9cb5082e1b is now 3 (58.487110276s elapsed) May 11 13:55:49.890: INFO: Restart count of pod container-probe-202/liveness-86545b9f-1aba-4979-aadb-df9cb5082e1b is now 4 (1m18.770232046s elapsed) May 11 13:57:02.153: INFO: Restart count of pod container-probe-202/liveness-86545b9f-1aba-4979-aadb-df9cb5082e1b is now 5 (2m31.033976621s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:57:02.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-202" for this suite. 
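The pod behind this test carries a liveness probe that keeps failing, so the kubelet restarts the container repeatedly; the assertion is only that restartCount never decreases between observations. A sketch under assumed names, image, and timings:

kubectl --kubeconfig=/root/.kube/config create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-restart-demo   # illustrative name
spec:
  containers:
  - name: liveness
    image: busybox              # assumed image
    args: ["/bin/sh", "-c", "sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # file never exists -> probe always fails
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# Poll the counter the test asserts on; it should only ever go up:
kubectl --kubeconfig=/root/.kube/config get pod liveness-restart-demo \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'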
May 11 13:57:10.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:57:10.625: INFO: namespace container-probe-202 deletion completed in 8.303090345s • [SLOW TEST:169.032 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:57:10.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-ed36ea82-823d-41c2-a5c3-500d1ac7fd92 STEP: Creating a pod to test consume secrets May 11 13:57:10.982: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d5ab994a-e88e-4541-af87-784249c80860" in namespace "projected-3511" to be "success or failure" May 11 13:57:11.055: INFO: Pod "pod-projected-secrets-d5ab994a-e88e-4541-af87-784249c80860": Phase="Pending", Reason="", readiness=false. Elapsed: 73.475102ms May 11 13:57:13.059: INFO: Pod "pod-projected-secrets-d5ab994a-e88e-4541-af87-784249c80860": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07748146s May 11 13:57:15.342: INFO: Pod "pod-projected-secrets-d5ab994a-e88e-4541-af87-784249c80860": Phase="Pending", Reason="", readiness=false. Elapsed: 4.360419809s May 11 13:57:17.406: INFO: Pod "pod-projected-secrets-d5ab994a-e88e-4541-af87-784249c80860": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.424288491s STEP: Saw pod success May 11 13:57:17.406: INFO: Pod "pod-projected-secrets-d5ab994a-e88e-4541-af87-784249c80860" satisfied condition "success or failure" May 11 13:57:17.408: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-d5ab994a-e88e-4541-af87-784249c80860 container projected-secret-volume-test: STEP: delete the pod May 11 13:57:17.461: INFO: Waiting for pod pod-projected-secrets-d5ab994a-e88e-4541-af87-784249c80860 to disappear May 11 13:57:17.544: INFO: Pod pod-projected-secrets-d5ab994a-e88e-4541-af87-784249c80860 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:57:17.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3511" for this suite. 
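"With mappings" here means the projected secret volume remaps a secret key to a custom file path via items, and the pod verifies the remapped file's content. A minimal sketch (all names and values are assumptions):

kubectl --kubeconfig=/root/.kube/config create -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-demo        # illustrative name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox                   # assumed image
    command: ["/bin/sh", "-c", "cat /etc/projected/new-path-data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: projected-secret-demo
          items:
          - key: data-1
            path: new-path-data-1    # the "mapping": key exposed under this path
EOF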
May 11 13:57:23.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:57:23.668: INFO: namespace projected-3511 deletion completed in 6.120642652s • [SLOW TEST:13.042 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:57:23.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-5de41e5d-31a2-463f-9f51-7ac51526802c STEP: Creating a pod to test consume configMaps May 11 13:57:23.948: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b8b4d5a5-ad8d-449b-b1cc-29a62bc6fee7" in namespace "projected-99" to be "success or failure" May 11 13:57:23.987: INFO: Pod "pod-projected-configmaps-b8b4d5a5-ad8d-449b-b1cc-29a62bc6fee7": Phase="Pending", Reason="", readiness=false. Elapsed: 39.64146ms May 11 13:57:25.991: INFO: Pod "pod-projected-configmaps-b8b4d5a5-ad8d-449b-b1cc-29a62bc6fee7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043218591s May 11 13:57:27.994: INFO: Pod "pod-projected-configmaps-b8b4d5a5-ad8d-449b-b1cc-29a62bc6fee7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046328333s May 11 13:57:29.998: INFO: Pod "pod-projected-configmaps-b8b4d5a5-ad8d-449b-b1cc-29a62bc6fee7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050334981s May 11 13:57:32.002: INFO: Pod "pod-projected-configmaps-b8b4d5a5-ad8d-449b-b1cc-29a62bc6fee7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05387241s STEP: Saw pod success May 11 13:57:32.002: INFO: Pod "pod-projected-configmaps-b8b4d5a5-ad8d-449b-b1cc-29a62bc6fee7" satisfied condition "success or failure" May 11 13:57:32.004: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-b8b4d5a5-ad8d-449b-b1cc-29a62bc6fee7 container projected-configmap-volume-test: STEP: delete the pod May 11 13:57:32.069: INFO: Waiting for pod pod-projected-configmaps-b8b4d5a5-ad8d-449b-b1cc-29a62bc6fee7 to disappear May 11 13:57:32.099: INFO: Pod pod-projected-configmaps-b8b4d5a5-ad8d-449b-b1cc-29a62bc6fee7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:57:32.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-99" for this suite. 
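Same pattern as the projected-secret case above, but with a configMap projection and the pod running under a non-root UID, which is what the [LinuxOnly] non-root variant checks. Sketch with assumed names and an assumed UID:

kubectl --kubeconfig=/root/.kube/config create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-demo   # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                # non-root UID (assumed value)
  containers:
  - name: projected-configmap-volume-test
    image: busybox                 # assumed image
    command: ["/bin/sh", "-c", "cat /etc/projected/path/to/data-1"]
    volumeMounts:
    - name: projected-configmap
      mountPath: /etc/projected
  volumes:
  - name: projected-configmap
    projected:
      sources:
      - configMap:
          name: projected-configmap-demo
          items:
          - key: data-1
            path: path/to/data-1
EOF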
May 11 13:57:38.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:57:38.342: INFO: namespace projected-99 deletion completed in 6.21205855s • [SLOW TEST:14.673 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:57:38.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-6257 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-6257 STEP: Creating statefulset with conflicting port in namespace statefulset-6257 STEP: Waiting until pod test-pod will start running in namespace statefulset-6257 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6257 May 11 13:57:49.188: INFO: Observed stateful pod in namespace: statefulset-6257, name: ss-0, uid: 9b7690f5-063c-4ac7-9e40-4ecfd1c654f9, status phase: Pending. Waiting for statefulset controller to delete. May 11 13:57:52.176: INFO: Observed stateful pod in namespace: statefulset-6257, name: ss-0, uid: 9b7690f5-063c-4ac7-9e40-4ecfd1c654f9, status phase: Failed. Waiting for statefulset controller to delete. May 11 13:57:52.204: INFO: Observed stateful pod in namespace: statefulset-6257, name: ss-0, uid: 9b7690f5-063c-4ac7-9e40-4ecfd1c654f9, status phase: Failed. Waiting for statefulset controller to delete. 
May 11 13:57:52.342: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6257 STEP: Removing pod with conflicting port in namespace statefulset-6257 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6257 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 11 13:58:03.462: INFO: Deleting all statefulset in ns statefulset-6257 May 11 13:58:03.465: INFO: Scaling statefulset ss to 0 May 11 13:58:13.567: INFO: Waiting for statefulset status.replicas updated to 0 May 11 13:58:13.569: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:58:13.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6257" for this suite. May 11 13:58:21.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:58:21.773: INFO: namespace statefulset-6257 deletion completed in 8.157525729s • [SLOW TEST:43.430 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:58:21.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 11 13:58:27.975: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:58:28.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9847" for this suite. 
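In contrast to the from-file case earlier, here the container fails without ever writing a termination-message file, so FallbackToLogsOnError makes the kubelet lift the message ("DONE") from the tail of the container log instead. Sketch with assumed names:

kubectl --kubeconfig=/root/.kube/config create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-from-logs   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                      # assumed image
    command: ["/bin/sh", "-c", "echo -n DONE; exit 1"]   # fail without touching the file
    terminationMessagePolicy: FallbackToLogsOnError
EOF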
May 11 13:58:35.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:58:35.440: INFO: namespace container-runtime-9847 deletion completed in 6.83522358s • [SLOW TEST:13.666 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:58:35.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-8640a1d6-6a90-4369-b0da-404a892011f2 in namespace container-probe-8176 May 11 13:58:41.664: INFO: Started pod liveness-8640a1d6-6a90-4369-b0da-404a892011f2 in namespace container-probe-8176 STEP: checking the pod's current state and verifying that restartCount is present May 11 13:58:41.667: INFO: Initial restart count of pod liveness-8640a1d6-6a90-4369-b0da-404a892011f2 is 0 May 11 13:59:04.370: INFO: Restart count of pod container-probe-8176/liveness-8640a1d6-6a90-4369-b0da-404a892011f2 is now 1 (22.703137648s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:59:04.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8176" for this suite. 
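Here the probe is an HTTP GET against /healthz rather than an exec; the test's server starts failing that endpoint after a while, and the single restart observed above is the kubelet reacting to it. A minimal httpGet-probe sketch (image, port, and timings are assumptions):

kubectl --kubeconfig=/root/.kube/config create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo       # illustrative name
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # assumed image whose /healthz starts failing after ~10s
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF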
May 11 13:59:10.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 13:59:10.769: INFO: namespace container-probe-8176 deletion completed in 6.258332172s • [SLOW TEST:35.329 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 13:59:10.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 13:59:19.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6207" for this suite. 
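hostAliases entries are rendered by the kubelet into the pod's /etc/hosts, which is the file the assertion inspects. Sketch with assumed names and addresses:

kubectl --kubeconfig=/root/.kube/config create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo   # illustrative name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: main
    image: busybox         # assumed image
    command: ["/bin/sh", "-c", "cat /etc/hosts"]
EOF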
May 11 14:00:05.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:00:05.549: INFO: namespace kubelet-test-6207 deletion completed in 46.087896186s • [SLOW TEST:54.781 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:00:05.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 14:00:05.717: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:00:13.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7008" for this suite. 
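The websocket variant exercises the same pod-log subresource a plain GET hits; only the transport differs. For a hand check of the endpoint itself (the pod name below is a hypothetical placeholder, not one from this run):

# Fetch the /log subresource the websocket client streams from:
kubectl --kubeconfig=/root/.kube/config get --raw \
  "/api/v1/namespaces/pods-7008/pods/pod-logs-websocket-demo/log"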
May 11 14:01:03.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:01:04.053: INFO: namespace pods-7008 deletion completed in 50.101608084s • [SLOW TEST:58.503 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:01:04.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 11 14:01:04.423: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. May 11 14:01:05.291: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 11 14:01:07.937: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724802465, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724802465, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724802465, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724802465, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 14:01:09.941: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724802465, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724802465, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724802465, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724802465, 
loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 14:01:11.979: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724802465, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724802465, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724802465, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724802465, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 14:01:14.670: INFO: Waited 724.349976ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:01:17.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-3907" for this suite. May 11 14:01:23.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:01:24.033: INFO: namespace aggregator-3907 deletion completed in 6.558830654s • [SLOW TEST:19.979 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:01:24.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-1a15ec18-b6ce-4dcc-8a2d-8bce72df47f0 STEP: Creating a pod to test consume secrets May 11 14:01:24.398: INFO: Waiting up to 5m0s for pod "pod-secrets-d2c890b7-145a-4243-85c4-ed321353358c" in namespace "secrets-3447" to be "success or failure" May 11 14:01:24.431: INFO: Pod "pod-secrets-d2c890b7-145a-4243-85c4-ed321353358c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 32.892722ms May 11 14:01:26.434: INFO: Pod "pod-secrets-d2c890b7-145a-4243-85c4-ed321353358c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035877154s May 11 14:01:28.625: INFO: Pod "pod-secrets-d2c890b7-145a-4243-85c4-ed321353358c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.227755046s May 11 14:01:30.629: INFO: Pod "pod-secrets-d2c890b7-145a-4243-85c4-ed321353358c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.230821214s STEP: Saw pod success May 11 14:01:30.629: INFO: Pod "pod-secrets-d2c890b7-145a-4243-85c4-ed321353358c" satisfied condition "success or failure" May 11 14:01:30.631: INFO: Trying to get logs from node iruya-worker pod pod-secrets-d2c890b7-145a-4243-85c4-ed321353358c container secret-volume-test: STEP: delete the pod May 11 14:01:30.878: INFO: Waiting for pod pod-secrets-d2c890b7-145a-4243-85c4-ed321353358c to disappear May 11 14:01:30.924: INFO: Pod pod-secrets-d2c890b7-145a-4243-85c4-ed321353358c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:01:30.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3447" for this suite. May 11 14:01:37.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:01:37.171: INFO: namespace secrets-3447 deletion completed in 6.243665136s • [SLOW TEST:13.138 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:01:37.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-pmwm STEP: Creating a pod to test atomic-volume-subpath May 11 14:01:37.662: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-pmwm" in namespace "subpath-1030" to be "success or failure" May 11 14:01:37.703: INFO: Pod "pod-subpath-test-downwardapi-pmwm": Phase="Pending", Reason="", readiness=false. Elapsed: 41.009909ms May 11 14:01:39.708: INFO: Pod "pod-subpath-test-downwardapi-pmwm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045847878s May 11 14:01:41.727: INFO: Pod "pod-subpath-test-downwardapi-pmwm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06492102s May 11 14:01:43.730: INFO: Pod "pod-subpath-test-downwardapi-pmwm": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.067558812s May 11 14:01:45.733: INFO: Pod "pod-subpath-test-downwardapi-pmwm": Phase="Running", Reason="", readiness=true. Elapsed: 8.070689222s May 11 14:01:47.736: INFO: Pod "pod-subpath-test-downwardapi-pmwm": Phase="Running", Reason="", readiness=true. Elapsed: 10.07379171s May 11 14:01:49.740: INFO: Pod "pod-subpath-test-downwardapi-pmwm": Phase="Running", Reason="", readiness=true. Elapsed: 12.077281786s May 11 14:01:51.744: INFO: Pod "pod-subpath-test-downwardapi-pmwm": Phase="Running", Reason="", readiness=true. Elapsed: 14.081621075s May 11 14:01:53.748: INFO: Pod "pod-subpath-test-downwardapi-pmwm": Phase="Running", Reason="", readiness=true. Elapsed: 16.085310742s May 11 14:01:55.751: INFO: Pod "pod-subpath-test-downwardapi-pmwm": Phase="Running", Reason="", readiness=true. Elapsed: 18.088967974s May 11 14:01:57.755: INFO: Pod "pod-subpath-test-downwardapi-pmwm": Phase="Running", Reason="", readiness=true. Elapsed: 20.092874424s May 11 14:01:59.759: INFO: Pod "pod-subpath-test-downwardapi-pmwm": Phase="Running", Reason="", readiness=true. Elapsed: 22.096468455s May 11 14:02:01.762: INFO: Pod "pod-subpath-test-downwardapi-pmwm": Phase="Running", Reason="", readiness=true. Elapsed: 24.099610036s May 11 14:02:04.082: INFO: Pod "pod-subpath-test-downwardapi-pmwm": Phase="Running", Reason="", readiness=true. Elapsed: 26.419175113s May 11 14:02:06.085: INFO: Pod "pod-subpath-test-downwardapi-pmwm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.423049706s STEP: Saw pod success May 11 14:02:06.085: INFO: Pod "pod-subpath-test-downwardapi-pmwm" satisfied condition "success or failure" May 11 14:02:06.088: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-pmwm container test-container-subpath-downwardapi-pmwm: STEP: delete the pod May 11 14:02:06.140: INFO: Waiting for pod pod-subpath-test-downwardapi-pmwm to disappear May 11 14:02:06.249: INFO: Pod pod-subpath-test-downwardapi-pmwm no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-pmwm May 11 14:02:06.249: INFO: Deleting pod "pod-subpath-test-downwardapi-pmwm" in namespace "subpath-1030" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:02:06.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1030" for this suite. 
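"Subpaths with downward pod" means the container mounts a single file out of a downwardAPI volume via subPath, and the test checks the file stays readable across the atomic-writer's symlink-swap updates. A sketch with assumed names and paths:

kubectl --kubeconfig=/root/.kube/config create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-downwardapi-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                     # assumed image
    command: ["/bin/sh", "-c", "cat /test-volume/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /test-volume/podname  # mounts exactly one file from the volume
      subPath: podname
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF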
May 11 14:02:14.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:02:14.448: INFO: namespace subpath-1030 deletion completed in 8.192081483s • [SLOW TEST:37.276 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:02:14.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 14:02:14.522: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 11 14:02:16.688: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:02:17.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4554" for this suite. 
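The moving parts here are a ResourceQuota capping the namespace at two pods and an RC asking for more; the controller surfaces a ReplicaFailure condition on the RC's status, and the condition clears once the RC is scaled back within quota. A sketch (the replica count of 3 and the image are assumptions):

kubectl --kubeconfig=/root/.kube/config create -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3            # more than the quota allows (assumed count)
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: main
        image: nginx     # assumed image
EOF
# The quota failure shows up as a ReplicaFailure condition on the RC:
kubectl --kubeconfig=/root/.kube/config get rc condition-test \
  -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].message}'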
May 11 14:02:25.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:02:25.952: INFO: namespace replication-controller-4554 deletion completed in 8.17797506s • [SLOW TEST:11.504 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:02:25.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 14:02:26.277: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 6.691876ms)
May 11 14:02:26.281: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.970593ms)
May 11 14:02:26.287: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 5.10435ms)
May 11 14:02:26.290: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.34358ms)
May 11 14:02:26.294: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.815627ms)
May 11 14:02:26.298: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.714725ms)
May 11 14:02:26.301: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.861654ms)
May 11 14:02:26.304: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.436418ms)
May 11 14:02:26.306: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 1.893451ms)
May 11 14:02:26.307: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 1.543988ms)
May 11 14:02:26.309: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.042122ms)
May 11 14:02:26.311: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 1.658162ms)
May 11 14:02:26.313: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 1.844456ms)
May 11 14:02:26.316: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.521299ms)
May 11 14:02:26.318: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.00407ms)
May 11 14:02:26.319: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 1.929206ms)
May 11 14:02:26.322: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.047599ms)
May 11 14:02:26.323: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 1.733751ms)
May 11 14:02:26.325: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 1.907148ms)
May 11 14:02:26.327: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/
(200; 1.886148ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:02:26.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-942" for this suite. May 11 14:02:32.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:02:32.410: INFO: namespace proxy-942 deletion completed in 6.080591636s • [SLOW TEST:6.458 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:02:32.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-6548 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet May 11 14:02:32.591: INFO: Found 0 stateful pods, waiting for 3 May 11 14:02:42.595: INFO: Found 2 stateful pods, waiting for 3 May 11 14:02:52.595: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 14:02:52.595: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 14:02:52.595: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 11 14:02:52.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6548 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 11 14:02:57.214: INFO: stderr: "I0511 14:02:57.040610 2700 log.go:172] (0xc000808a50) (0xc000b5a3c0) Create stream\nI0511 14:02:57.040658 2700 log.go:172] (0xc000808a50) (0xc000b5a3c0) Stream added, broadcasting: 1\nI0511 14:02:57.043379 2700 log.go:172] (0xc000808a50) Reply frame received for 1\nI0511 14:02:57.043405 2700 log.go:172] (0xc000808a50) (0xc000801ea0) Create stream\nI0511 14:02:57.043412 2700 log.go:172] (0xc000808a50) (0xc000801ea0) Stream added, broadcasting: 3\nI0511 14:02:57.044102 2700 log.go:172] (0xc000808a50) Reply frame received for 3\nI0511 14:02:57.044126 2700 log.go:172] (0xc000808a50) (0xc0005c2a00) Create stream\nI0511 14:02:57.044135 2700 log.go:172] (0xc000808a50) (0xc0005c2a00) Stream 
added, broadcasting: 5\nI0511 14:02:57.044787 2700 log.go:172] (0xc000808a50) Reply frame received for 5\nI0511 14:02:57.123710 2700 log.go:172] (0xc000808a50) Data frame received for 5\nI0511 14:02:57.123731 2700 log.go:172] (0xc0005c2a00) (5) Data frame handling\nI0511 14:02:57.123745 2700 log.go:172] (0xc0005c2a00) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0511 14:02:57.198286 2700 log.go:172] (0xc000808a50) Data frame received for 3\nI0511 14:02:57.198316 2700 log.go:172] (0xc000801ea0) (3) Data frame handling\nI0511 14:02:57.198324 2700 log.go:172] (0xc000801ea0) (3) Data frame sent\nI0511 14:02:57.198387 2700 log.go:172] (0xc000808a50) Data frame received for 5\nI0511 14:02:57.198467 2700 log.go:172] (0xc0005c2a00) (5) Data frame handling\nI0511 14:02:57.198519 2700 log.go:172] (0xc000808a50) Data frame received for 3\nI0511 14:02:57.198540 2700 log.go:172] (0xc000801ea0) (3) Data frame handling\nI0511 14:02:57.209936 2700 log.go:172] (0xc000808a50) Data frame received for 1\nI0511 14:02:57.209960 2700 log.go:172] (0xc000b5a3c0) (1) Data frame handling\nI0511 14:02:57.209970 2700 log.go:172] (0xc000b5a3c0) (1) Data frame sent\nI0511 14:02:57.209982 2700 log.go:172] (0xc000808a50) (0xc000b5a3c0) Stream removed, broadcasting: 1\nI0511 14:02:57.210229 2700 log.go:172] (0xc000808a50) (0xc000b5a3c0) Stream removed, broadcasting: 1\nI0511 14:02:57.210242 2700 log.go:172] (0xc000808a50) (0xc000801ea0) Stream removed, broadcasting: 3\nI0511 14:02:57.210249 2700 log.go:172] (0xc000808a50) (0xc0005c2a00) Stream removed, broadcasting: 5\n" May 11 14:02:57.214: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 11 14:02:57.214: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 11 14:03:07.243: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 11 14:03:17.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6548 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 14:03:17.904: INFO: stderr: "I0511 14:03:17.835767 2729 log.go:172] (0xc0009889a0) (0xc000944be0) Create stream\nI0511 14:03:17.835827 2729 log.go:172] (0xc0009889a0) (0xc000944be0) Stream added, broadcasting: 1\nI0511 14:03:17.839235 2729 log.go:172] (0xc0009889a0) Reply frame received for 1\nI0511 14:03:17.839279 2729 log.go:172] (0xc0009889a0) (0xc000944000) Create stream\nI0511 14:03:17.839295 2729 log.go:172] (0xc0009889a0) (0xc000944000) Stream added, broadcasting: 3\nI0511 14:03:17.840162 2729 log.go:172] (0xc0009889a0) Reply frame received for 3\nI0511 14:03:17.840205 2729 log.go:172] (0xc0009889a0) (0xc0009440a0) Create stream\nI0511 14:03:17.840219 2729 log.go:172] (0xc0009889a0) (0xc0009440a0) Stream added, broadcasting: 5\nI0511 14:03:17.841271 2729 log.go:172] (0xc0009889a0) Reply frame received for 5\nI0511 14:03:17.898847 2729 log.go:172] (0xc0009889a0) Data frame received for 5\nI0511 14:03:17.898877 2729 log.go:172] (0xc0009440a0) (5) Data frame handling\nI0511 14:03:17.898894 2729 log.go:172] (0xc0009440a0) (5) Data frame sent\nI0511 14:03:17.898910 2729 log.go:172] (0xc0009889a0) Data frame received for 5\nI0511 14:03:17.898922 2729 log.go:172] (0xc0009440a0) (5) Data frame handling\nI0511 14:03:17.898936 
2729 log.go:172] (0xc0009889a0) Data frame received for 3\nI0511 14:03:17.898945 2729 log.go:172] (0xc000944000) (3) Data frame handling\nI0511 14:03:17.898955 2729 log.go:172] (0xc000944000) (3) Data frame sent\nI0511 14:03:17.898967 2729 log.go:172] (0xc0009889a0) Data frame received for 3\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0511 14:03:17.898989 2729 log.go:172] (0xc000944000) (3) Data frame handling\nI0511 14:03:17.900014 2729 log.go:172] (0xc0009889a0) Data frame received for 1\nI0511 14:03:17.900038 2729 log.go:172] (0xc000944be0) (1) Data frame handling\nI0511 14:03:17.900052 2729 log.go:172] (0xc000944be0) (1) Data frame sent\nI0511 14:03:17.900061 2729 log.go:172] (0xc0009889a0) (0xc000944be0) Stream removed, broadcasting: 1\nI0511 14:03:17.900254 2729 log.go:172] (0xc0009889a0) Go away received\nI0511 14:03:17.900419 2729 log.go:172] (0xc0009889a0) (0xc000944be0) Stream removed, broadcasting: 1\nI0511 14:03:17.900439 2729 log.go:172] (0xc0009889a0) (0xc000944000) Stream removed, broadcasting: 3\nI0511 14:03:17.900447 2729 log.go:172] (0xc0009889a0) (0xc0009440a0) Stream removed, broadcasting: 5\n" May 11 14:03:17.904: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 11 14:03:17.904: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 11 14:03:27.921: INFO: Waiting for StatefulSet statefulset-6548/ss2 to complete update May 11 14:03:27.921: INFO: Waiting for Pod statefulset-6548/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 11 14:03:27.921: INFO: Waiting for Pod statefulset-6548/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 11 14:03:38.158: INFO: Waiting for StatefulSet statefulset-6548/ss2 to complete update May 11 14:03:38.158: INFO: Waiting for Pod statefulset-6548/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 11 14:03:47.929: INFO: Waiting for StatefulSet statefulset-6548/ss2 to complete update May 11 14:03:47.929: INFO: Waiting for Pod statefulset-6548/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 11 14:03:57.942: INFO: Waiting for StatefulSet statefulset-6548/ss2 to complete update STEP: Rolling back to a previous revision May 11 14:04:07.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6548 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 11 14:04:08.206: INFO: stderr: "I0511 14:04:08.065337 2749 log.go:172] (0xc000118fd0) (0xc0005b8b40) Create stream\nI0511 14:04:08.065391 2749 log.go:172] (0xc000118fd0) (0xc0005b8b40) Stream added, broadcasting: 1\nI0511 14:04:08.067772 2749 log.go:172] (0xc000118fd0) Reply frame received for 1\nI0511 14:04:08.067818 2749 log.go:172] (0xc000118fd0) (0xc00083a000) Create stream\nI0511 14:04:08.067831 2749 log.go:172] (0xc000118fd0) (0xc00083a000) Stream added, broadcasting: 3\nI0511 14:04:08.068669 2749 log.go:172] (0xc000118fd0) Reply frame received for 3\nI0511 14:04:08.068729 2749 log.go:172] (0xc000118fd0) (0xc0005b8be0) Create stream\nI0511 14:04:08.068752 2749 log.go:172] (0xc000118fd0) (0xc0005b8be0) Stream added, broadcasting: 5\nI0511 14:04:08.069913 2749 log.go:172] (0xc000118fd0) Reply frame received for 5\nI0511 14:04:08.158412 2749 log.go:172] (0xc000118fd0) Data frame received for 5\nI0511 14:04:08.158440 2749 log.go:172] (0xc0005b8be0) (5) Data frame handling\nI0511 14:04:08.158455 2749 log.go:172] 
(0xc0005b8be0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0511 14:04:08.198245 2749 log.go:172] (0xc000118fd0) Data frame received for 5\nI0511 14:04:08.198273 2749 log.go:172] (0xc0005b8be0) (5) Data frame handling\nI0511 14:04:08.198311 2749 log.go:172] (0xc000118fd0) Data frame received for 3\nI0511 14:04:08.198336 2749 log.go:172] (0xc00083a000) (3) Data frame handling\nI0511 14:04:08.198360 2749 log.go:172] (0xc00083a000) (3) Data frame sent\nI0511 14:04:08.198374 2749 log.go:172] (0xc000118fd0) Data frame received for 3\nI0511 14:04:08.198384 2749 log.go:172] (0xc00083a000) (3) Data frame handling\nI0511 14:04:08.200156 2749 log.go:172] (0xc000118fd0) Data frame received for 1\nI0511 14:04:08.200174 2749 log.go:172] (0xc0005b8b40) (1) Data frame handling\nI0511 14:04:08.200199 2749 log.go:172] (0xc0005b8b40) (1) Data frame sent\nI0511 14:04:08.200216 2749 log.go:172] (0xc000118fd0) (0xc0005b8b40) Stream removed, broadcasting: 1\nI0511 14:04:08.200240 2749 log.go:172] (0xc000118fd0) Go away received\nI0511 14:04:08.203201 2749 log.go:172] (0xc000118fd0) (0xc0005b8b40) Stream removed, broadcasting: 1\nI0511 14:04:08.203217 2749 log.go:172] (0xc000118fd0) (0xc00083a000) Stream removed, broadcasting: 3\nI0511 14:04:08.203225 2749 log.go:172] (0xc000118fd0) (0xc0005b8be0) Stream removed, broadcasting: 5\n" May 11 14:04:08.206: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 11 14:04:08.206: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 11 14:04:18.236: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 11 14:04:28.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6548 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 14:04:28.493: INFO: stderr: "I0511 14:04:28.428937 2769 log.go:172] (0xc000520630) (0xc0005d8aa0) Create stream\nI0511 14:04:28.428987 2769 log.go:172] (0xc000520630) (0xc0005d8aa0) Stream added, broadcasting: 1\nI0511 14:04:28.431048 2769 log.go:172] (0xc000520630) Reply frame received for 1\nI0511 14:04:28.431382 2769 log.go:172] (0xc000520630) (0xc000976000) Create stream\nI0511 14:04:28.431425 2769 log.go:172] (0xc000520630) (0xc000976000) Stream added, broadcasting: 3\nI0511 14:04:28.433363 2769 log.go:172] (0xc000520630) Reply frame received for 3\nI0511 14:04:28.433446 2769 log.go:172] (0xc000520630) (0xc000a10000) Create stream\nI0511 14:04:28.433482 2769 log.go:172] (0xc000520630) (0xc000a10000) Stream added, broadcasting: 5\nI0511 14:04:28.434517 2769 log.go:172] (0xc000520630) Reply frame received for 5\nI0511 14:04:28.485684 2769 log.go:172] (0xc000520630) Data frame received for 5\nI0511 14:04:28.485706 2769 log.go:172] (0xc000a10000) (5) Data frame handling\nI0511 14:04:28.485718 2769 log.go:172] (0xc000a10000) (5) Data frame sent\nI0511 14:04:28.485725 2769 log.go:172] (0xc000520630) Data frame received for 5\nI0511 14:04:28.485729 2769 log.go:172] (0xc000a10000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0511 14:04:28.485747 2769 log.go:172] (0xc000520630) Data frame received for 3\nI0511 14:04:28.485752 2769 log.go:172] (0xc000976000) (3) Data frame handling\nI0511 14:04:28.485758 2769 log.go:172] (0xc000976000) (3) Data frame sent\nI0511 14:04:28.485764 2769 log.go:172] (0xc000520630) Data frame received for 3\nI0511 14:04:28.485771 2769 log.go:172] 
(0xc000976000) (3) Data frame handling\nI0511 14:04:28.487658 2769 log.go:172] (0xc000520630) Data frame received for 1\nI0511 14:04:28.487685 2769 log.go:172] (0xc0005d8aa0) (1) Data frame handling\nI0511 14:04:28.487705 2769 log.go:172] (0xc0005d8aa0) (1) Data frame sent\nI0511 14:04:28.487723 2769 log.go:172] (0xc000520630) (0xc0005d8aa0) Stream removed, broadcasting: 1\nI0511 14:04:28.487747 2769 log.go:172] (0xc000520630) Go away received\nI0511 14:04:28.488077 2769 log.go:172] (0xc000520630) (0xc0005d8aa0) Stream removed, broadcasting: 1\nI0511 14:04:28.488103 2769 log.go:172] (0xc000520630) (0xc000976000) Stream removed, broadcasting: 3\nI0511 14:04:28.488115 2769 log.go:172] (0xc000520630) (0xc000a10000) Stream removed, broadcasting: 5\n" May 11 14:04:28.493: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 11 14:04:28.493: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 11 14:04:38.513: INFO: Waiting for StatefulSet statefulset-6548/ss2 to complete update May 11 14:04:38.513: INFO: Waiting for Pod statefulset-6548/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 11 14:04:38.513: INFO: Waiting for Pod statefulset-6548/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 11 14:04:48.520: INFO: Waiting for StatefulSet statefulset-6548/ss2 to complete update May 11 14:04:48.520: INFO: Waiting for Pod statefulset-6548/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 11 14:04:58.620: INFO: Waiting for StatefulSet statefulset-6548/ss2 to complete update May 11 14:04:58.620: INFO: Waiting for Pod statefulset-6548/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 11 14:05:08.820: INFO: Deleting all statefulset in ns statefulset-6548 May 11 14:05:08.910: INFO: Scaling statefulset ss2 to 0 May 11 14:05:29.013: INFO: Waiting for statefulset status.replicas updated to 0 May 11 14:05:29.015: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:05:29.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6548" for this suite. 
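Note on the test above: the exec'd `mv` of /usr/share/nginx/html/index.html back and forth in the captured stderr is the test breaking and restoring the pods' HTTP readiness probe so it can pace the rollout. A minimal sketch of the same update and rollback by hand, using the names from this run (StatefulSet ss2, namespace statefulset-6548) and assuming the template's container is named nginx; the test drives the API directly, so kubectl is only an equivalent path:

    # Trigger a new controller revision by changing the template image
    kubectl -n statefulset-6548 set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
    # Pods are replaced in reverse ordinal order; watch the rollout
    kubectl -n statefulset-6548 rollout status statefulset/ss2
    # The revisions named in the log (ss2-6c5cd755cd, ss2-7c9b54fd4c) are ControllerRevisions
    kubectl -n statefulset-6548 get controllerrevisions
    # Roll back the way the test does: set the template back to the old image
    kubectl -n statefulset-6548 set image statefulset/ss2 nginx=docker.io/library/nginx:1.14-alpine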
May 11 14:05:41.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:05:41.350: INFO: namespace statefulset-6548 deletion completed in 12.294833409s • [SLOW TEST:188.940 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:05:41.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-2f09a19a-4842-4cad-ae0c-10d572f52c3a STEP: Creating a pod to test consume secrets May 11 14:05:41.642: INFO: Waiting up to 5m0s for pod "pod-secrets-bc143264-db07-46cc-9916-c64df7683c5a" in namespace "secrets-4641" to be "success or failure" May 11 14:05:41.645: INFO: Pod "pod-secrets-bc143264-db07-46cc-9916-c64df7683c5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.878562ms May 11 14:05:43.761: INFO: Pod "pod-secrets-bc143264-db07-46cc-9916-c64df7683c5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119269856s May 11 14:05:46.064: INFO: Pod "pod-secrets-bc143264-db07-46cc-9916-c64df7683c5a": Phase="Running", Reason="", readiness=true. Elapsed: 4.422264241s May 11 14:05:48.068: INFO: Pod "pod-secrets-bc143264-db07-46cc-9916-c64df7683c5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.425482164s STEP: Saw pod success May 11 14:05:48.068: INFO: Pod "pod-secrets-bc143264-db07-46cc-9916-c64df7683c5a" satisfied condition "success or failure" May 11 14:05:48.070: INFO: Trying to get logs from node iruya-worker pod pod-secrets-bc143264-db07-46cc-9916-c64df7683c5a container secret-volume-test: STEP: delete the pod May 11 14:05:48.097: INFO: Waiting for pod pod-secrets-bc143264-db07-46cc-9916-c64df7683c5a to disappear May 11 14:05:48.186: INFO: Pod pod-secrets-bc143264-db07-46cc-9916-c64df7683c5a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:05:48.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4641" for this suite. 
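Note on the test above: the defaultMode check mounts a secret as a volume and has the pod print the resulting file mode before exiting, which is why the pod runs to Succeeded. A minimal sketch of the same setup, assuming a key named data-1 and mode 0400; busybox stands in for the test's own mounttest image:

    kubectl -n secrets-4641 create secret generic secret-test --from-literal=data-1=value-1
    kubectl -n secrets-4641 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-defaultmode
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: secret-test
          defaultMode: 0400
    EOF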
May 11 14:05:54.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:05:54.406: INFO: namespace secrets-4641 deletion completed in 6.218152986s • [SLOW TEST:13.056 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:05:54.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 11 14:06:00.651: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 11 14:06:15.800: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:06:15.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5534" for this suite. 
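Note on the test above: the grace-period test deletes the pod with a deadline and then polls until the name is gone, which is what the "no pod exists with the name we were looking for" line reports. The pod name is generated and not shown in this excerpt, so POD_NAME below is a placeholder:

    # Delete gracefully, then confirm the termination completed
    kubectl -n pods-5534 delete pod POD_NAME --grace-period=30
    kubectl -n pods-5534 get pod POD_NAME    # eventually returns NotFound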
May 11 14:06:22.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:06:22.250: INFO: namespace pods-5534 deletion completed in 6.443707828s • [SLOW TEST:27.844 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:06:22.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 14:06:22.423: INFO: Waiting up to 5m0s for pod "downwardapi-volume-538b9cfe-483f-4716-98e8-94034f1a88cf" in namespace "projected-1313" to be "success or failure" May 11 14:06:22.480: INFO: Pod "downwardapi-volume-538b9cfe-483f-4716-98e8-94034f1a88cf": Phase="Pending", Reason="", readiness=false. Elapsed: 57.292974ms May 11 14:06:24.483: INFO: Pod "downwardapi-volume-538b9cfe-483f-4716-98e8-94034f1a88cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060507139s May 11 14:06:26.487: INFO: Pod "downwardapi-volume-538b9cfe-483f-4716-98e8-94034f1a88cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064131931s May 11 14:06:28.490: INFO: Pod "downwardapi-volume-538b9cfe-483f-4716-98e8-94034f1a88cf": Phase="Running", Reason="", readiness=true. Elapsed: 6.067569167s May 11 14:06:30.588: INFO: Pod "downwardapi-volume-538b9cfe-483f-4716-98e8-94034f1a88cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.165498537s STEP: Saw pod success May 11 14:06:30.588: INFO: Pod "downwardapi-volume-538b9cfe-483f-4716-98e8-94034f1a88cf" satisfied condition "success or failure" May 11 14:06:30.592: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-538b9cfe-483f-4716-98e8-94034f1a88cf container client-container: STEP: delete the pod May 11 14:06:30.794: INFO: Waiting for pod downwardapi-volume-538b9cfe-483f-4716-98e8-94034f1a88cf to disappear May 11 14:06:30.821: INFO: Pod downwardapi-volume-538b9cfe-483f-4716-98e8-94034f1a88cf no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:06:30.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1313" for this suite. 
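Note on the test above: the downward API file is served through a projected volume and carries the container's own CPU request via resourceFieldRef. A minimal sketch under assumed names and an assumed request value:

    kubectl -n projected-1313 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-cpu-request
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
        resources:
          requests:
            cpu: 250m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: cpu_request
                resourceFieldRef:
                  containerName: client-container
                  resource: requests.cpu
    EOF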
May 11 14:06:36.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:06:37.033: INFO: namespace projected-1313 deletion completed in 6.208572214s • [SLOW TEST:14.782 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:06:37.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 11 14:06:37.487: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 14:06:37.534: INFO: Number of nodes with available pods: 0 May 11 14:06:37.534: INFO: Node iruya-worker is running more than one daemon pod May 11 14:06:38.559: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 14:06:38.561: INFO: Number of nodes with available pods: 0 May 11 14:06:38.561: INFO: Node iruya-worker is running more than one daemon pod May 11 14:06:39.846: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 14:06:39.848: INFO: Number of nodes with available pods: 0 May 11 14:06:39.848: INFO: Node iruya-worker is running more than one daemon pod May 11 14:06:40.649: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 14:06:40.966: INFO: Number of nodes with available pods: 0 May 11 14:06:40.966: INFO: Node iruya-worker is running more than one daemon pod May 11 14:06:41.540: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 14:06:41.543: INFO: Number of nodes with available pods: 0 May 11 14:06:41.543: INFO: Node iruya-worker is running more than one daemon pod May 11 14:06:42.538: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 14:06:42.542: INFO: Number of nodes with 
available pods: 0 May 11 14:06:42.542: INFO: Node iruya-worker is running more than one daemon pod May 11 14:06:43.539: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 14:06:43.542: INFO: Number of nodes with available pods: 0 May 11 14:06:43.542: INFO: Node iruya-worker is running more than one daemon pod May 11 14:06:44.636: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 14:06:44.638: INFO: Number of nodes with available pods: 1 May 11 14:06:44.638: INFO: Node iruya-worker is running more than one daemon pod May 11 14:06:45.539: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 14:06:45.541: INFO: Number of nodes with available pods: 2 May 11 14:06:45.541: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 11 14:06:45.590: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 14:06:45.666: INFO: Number of nodes with available pods: 2 May 11 14:06:45.666: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6058, will wait for the garbage collector to delete the pods May 11 14:06:46.757: INFO: Deleting DaemonSet.extensions daemon-set took: 6.609657ms May 11 14:06:47.757: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.000202262s May 11 14:07:03.019: INFO: Number of nodes with available pods: 0 May 11 14:07:03.020: INFO: Number of running nodes: 0, number of available pods: 0 May 11 14:07:03.080: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6058/daemonsets","resourceVersion":"10259206"},"items":null} May 11 14:07:03.082: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6058/pods","resourceVersion":"10259206"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:07:03.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6058" for this suite. 
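Note on the test above: the repeated "DaemonSet pods can't tolerate node iruya-control-plane" lines are expected, since the daemon pods carry no toleration for the master's node-role.kubernetes.io/master:NoSchedule taint, so that node is skipped and only the two workers count. The revive behavior under test (a daemon pod forced to Failed gets replaced) can be approximated by deleting a daemon pod and watching the controller recreate it; pod names are generated, so DAEMON_POD is a placeholder:

    kubectl -n daemonsets-6058 describe ds daemon-set
    kubectl -n daemonsets-6058 delete pod DAEMON_POD
    kubectl -n daemonsets-6058 get pods -o wide --watch   # a replacement appears on the same node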
May 11 14:07:11.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:07:11.362: INFO: namespace daemonsets-6058 deletion completed in 8.145898563s • [SLOW TEST:34.329 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:07:11.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 11 14:07:11.577: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:07:32.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4815" for this suite. 
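Note on the test above: it registers a watch before submitting the pod and then asserts that the creation, the graceful-termination update, and the deletion are all observed as watch events. The closest by-hand equivalent is simply following the pod list (recent kubectl, though not necessarily this vintage, can also print the ADDED/MODIFIED/DELETED event types with --output-watch-events):

    kubectl -n pods-4815 get pods --watch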
May 11 14:07:38.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:07:38.542: INFO: namespace pods-4815 deletion completed in 6.254964177s • [SLOW TEST:27.179 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:07:38.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-57nd STEP: Creating a pod to test atomic-volume-subpath May 11 14:07:38.998: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-57nd" in namespace "subpath-9359" to be "success or failure" May 11 14:07:39.022: INFO: Pod "pod-subpath-test-configmap-57nd": Phase="Pending", Reason="", readiness=false. Elapsed: 23.190391ms May 11 14:07:41.026: INFO: Pod "pod-subpath-test-configmap-57nd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027795447s May 11 14:07:43.030: INFO: Pod "pod-subpath-test-configmap-57nd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031195389s May 11 14:07:45.034: INFO: Pod "pod-subpath-test-configmap-57nd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035234439s May 11 14:07:47.037: INFO: Pod "pod-subpath-test-configmap-57nd": Phase="Running", Reason="", readiness=true. Elapsed: 8.038812086s May 11 14:07:49.062: INFO: Pod "pod-subpath-test-configmap-57nd": Phase="Running", Reason="", readiness=true. Elapsed: 10.063814545s May 11 14:07:51.066: INFO: Pod "pod-subpath-test-configmap-57nd": Phase="Running", Reason="", readiness=true. Elapsed: 12.06737181s May 11 14:07:53.069: INFO: Pod "pod-subpath-test-configmap-57nd": Phase="Running", Reason="", readiness=true. Elapsed: 14.070833259s May 11 14:07:55.074: INFO: Pod "pod-subpath-test-configmap-57nd": Phase="Running", Reason="", readiness=true. Elapsed: 16.075371473s May 11 14:07:57.077: INFO: Pod "pod-subpath-test-configmap-57nd": Phase="Running", Reason="", readiness=true. Elapsed: 18.078669277s May 11 14:07:59.100: INFO: Pod "pod-subpath-test-configmap-57nd": Phase="Running", Reason="", readiness=true. Elapsed: 20.101231746s May 11 14:08:01.140: INFO: Pod "pod-subpath-test-configmap-57nd": Phase="Running", Reason="", readiness=true. Elapsed: 22.141523036s May 11 14:08:03.207: INFO: Pod "pod-subpath-test-configmap-57nd": Phase="Running", Reason="", readiness=true. Elapsed: 24.208129713s May 11 14:08:05.210: INFO: Pod "pod-subpath-test-configmap-57nd": Phase="Running", Reason="", readiness=true. 
Elapsed: 26.211287733s May 11 14:08:07.583: INFO: Pod "pod-subpath-test-configmap-57nd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.584916443s STEP: Saw pod success May 11 14:08:07.583: INFO: Pod "pod-subpath-test-configmap-57nd" satisfied condition "success or failure" May 11 14:08:07.615: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-57nd container test-container-subpath-configmap-57nd: STEP: delete the pod May 11 14:08:07.927: INFO: Waiting for pod pod-subpath-test-configmap-57nd to disappear May 11 14:08:08.044: INFO: Pod pod-subpath-test-configmap-57nd no longer exists STEP: Deleting pod pod-subpath-test-configmap-57nd May 11 14:08:08.045: INFO: Deleting pod "pod-subpath-test-configmap-57nd" in namespace "subpath-9359" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:08:08.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9359" for this suite. May 11 14:08:14.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:08:15.080: INFO: namespace subpath-9359 deletion completed in 7.029954497s • [SLOW TEST:36.538 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:08:15.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:08:24.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7619" for this suite. 
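Note on the test above: the ordering guarantee being checked is that watches started from the same resourceVersion deliver events in the same order. A rough by-hand analogue, assuming something else is concurrently mutating configmaps in the namespace the way the test's background goroutine does:

    kubectl -n watch-7619 get configmaps --watch -o jsonpath='{.metadata.resourceVersion}{"\n"}' > a.txt &
    kubectl -n watch-7619 get configmaps --watch -o jsonpath='{.metadata.resourceVersion}{"\n"}' > b.txt &
    sleep 30; kill %1 %2
    diff a.txt b.txt    # no output: both watches saw the same order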
May 11 14:08:30.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:08:30.355: INFO: namespace watch-7619 deletion completed in 6.117184946s • [SLOW TEST:15.274 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:08:30.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 14:08:30.693: INFO: Creating ReplicaSet my-hostname-basic-7fda2509-163b-43f3-b1e6-bc6072105351 May 11 14:08:30.789: INFO: Pod name my-hostname-basic-7fda2509-163b-43f3-b1e6-bc6072105351: Found 0 pods out of 1 May 11 14:08:35.792: INFO: Pod name my-hostname-basic-7fda2509-163b-43f3-b1e6-bc6072105351: Found 1 pods out of 1 May 11 14:08:35.792: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-7fda2509-163b-43f3-b1e6-bc6072105351" is running May 11 14:08:35.794: INFO: Pod "my-hostname-basic-7fda2509-163b-43f3-b1e6-bc6072105351-kjm6t" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 14:08:30 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 14:08:35 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 14:08:35 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 14:08:30 +0000 UTC Reason: Message:}]) May 11 14:08:35.795: INFO: Trying to dial the pod May 11 14:08:40.803: INFO: Controller my-hostname-basic-7fda2509-163b-43f3-b1e6-bc6072105351: Got expected result from replica 1 [my-hostname-basic-7fda2509-163b-43f3-b1e6-bc6072105351-kjm6t]: "my-hostname-basic-7fda2509-163b-43f3-b1e6-bc6072105351-kjm6t", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:08:40.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5490" for this suite. 
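Note on the test above: each replica of a serve-hostname style image answers with its own pod name, which is what "Got expected result from replica 1" reports. The pod name below is from the log; the port-forward path and port 9376 (the usual serve-hostname port) are assumptions, since the test dials the pod through the API proxy instead:

    kubectl -n replicaset-5490 get rs
    kubectl -n replicaset-5490 port-forward pod/my-hostname-basic-7fda2509-163b-43f3-b1e6-bc6072105351-kjm6t 8080:9376 &
    curl -s localhost:8080    # replies with the pod's own name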
May 11 14:08:46.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:08:46.906: INFO: namespace replicaset-5490 deletion completed in 6.099475357s • [SLOW TEST:16.550 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:08:46.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 14:08:47.030: INFO: Waiting up to 5m0s for pod "downwardapi-volume-02f090d5-943d-47e3-8077-983213a5acdf" in namespace "downward-api-7056" to be "success or failure" May 11 14:08:47.089: INFO: Pod "downwardapi-volume-02f090d5-943d-47e3-8077-983213a5acdf": Phase="Pending", Reason="", readiness=false. Elapsed: 59.725334ms May 11 14:08:49.285: INFO: Pod "downwardapi-volume-02f090d5-943d-47e3-8077-983213a5acdf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255186303s May 11 14:08:51.381: INFO: Pod "downwardapi-volume-02f090d5-943d-47e3-8077-983213a5acdf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.351345312s May 11 14:08:53.561: INFO: Pod "downwardapi-volume-02f090d5-943d-47e3-8077-983213a5acdf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.531521062s May 11 14:08:55.564: INFO: Pod "downwardapi-volume-02f090d5-943d-47e3-8077-983213a5acdf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.534817104s STEP: Saw pod success May 11 14:08:55.565: INFO: Pod "downwardapi-volume-02f090d5-943d-47e3-8077-983213a5acdf" satisfied condition "success or failure" May 11 14:08:55.567: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-02f090d5-943d-47e3-8077-983213a5acdf container client-container: STEP: delete the pod May 11 14:08:55.676: INFO: Waiting for pod downwardapi-volume-02f090d5-943d-47e3-8077-983213a5acdf to disappear May 11 14:08:55.705: INFO: Pod downwardapi-volume-02f090d5-943d-47e3-8077-983213a5acdf no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:08:55.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7056" for this suite. 
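Note on the test above: defaultMode plays the same role here as in the secret-volume test earlier, but on a plain downwardAPI volume; it applies to every projected file unless an individual item sets its own mode. The field path can be checked directly:

    kubectl explain pod.spec.volumes.downwardAPI.defaultMode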
May 11 14:09:01.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:09:01.892: INFO: namespace downward-api-7056 deletion completed in 6.18449968s • [SLOW TEST:14.986 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:09:01.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 14:09:02.061: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1609bbdd-9a51-45aa-a1a3-ad7ae52479be" in namespace "downward-api-492" to be "success or failure" May 11 14:09:02.190: INFO: Pod "downwardapi-volume-1609bbdd-9a51-45aa-a1a3-ad7ae52479be": Phase="Pending", Reason="", readiness=false. Elapsed: 129.681416ms May 11 14:09:04.194: INFO: Pod "downwardapi-volume-1609bbdd-9a51-45aa-a1a3-ad7ae52479be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133370337s May 11 14:09:06.198: INFO: Pod "downwardapi-volume-1609bbdd-9a51-45aa-a1a3-ad7ae52479be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137000857s May 11 14:09:08.263: INFO: Pod "downwardapi-volume-1609bbdd-9a51-45aa-a1a3-ad7ae52479be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.20174608s STEP: Saw pod success May 11 14:09:08.263: INFO: Pod "downwardapi-volume-1609bbdd-9a51-45aa-a1a3-ad7ae52479be" satisfied condition "success or failure" May 11 14:09:08.327: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-1609bbdd-9a51-45aa-a1a3-ad7ae52479be container client-container: STEP: delete the pod May 11 14:09:08.799: INFO: Waiting for pod downwardapi-volume-1609bbdd-9a51-45aa-a1a3-ad7ae52479be to disappear May 11 14:09:08.926: INFO: Pod downwardapi-volume-1609bbdd-9a51-45aa-a1a3-ad7ae52479be no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:09:08.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-492" for this suite. 
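Note on the test above: same downward API pattern as the CPU-request test earlier, now for limits.memory; the divisor is the part worth noting, since it scales the value written to the file. Names and sizes below are assumptions:

    kubectl -n downward-api-492 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-memory-limit
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
        resources:
          limits:
            memory: 64Mi
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
              divisor: 1Mi       # the file then reads 64
    EOF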
May 11 14:09:14.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:09:15.059: INFO: namespace downward-api-492 deletion completed in 6.129848844s • [SLOW TEST:13.167 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:09:15.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller May 11 14:09:15.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-979' May 11 14:09:15.571: INFO: stderr: "" May 11 14:09:15.571: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 11 14:09:15.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-979' May 11 14:09:15.728: INFO: stderr: "" May 11 14:09:15.728: INFO: stdout: "update-demo-nautilus-26544 update-demo-nautilus-qtxn2 " May 11 14:09:15.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-26544 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-979' May 11 14:09:15.891: INFO: stderr: "" May 11 14:09:15.891: INFO: stdout: "" May 11 14:09:15.891: INFO: update-demo-nautilus-26544 is created but not running May 11 14:09:20.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-979' May 11 14:09:20.983: INFO: stderr: "" May 11 14:09:20.983: INFO: stdout: "update-demo-nautilus-26544 update-demo-nautilus-qtxn2 " May 11 14:09:20.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-26544 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-979' May 11 14:09:21.071: INFO: stderr: "" May 11 14:09:21.071: INFO: stdout: "" May 11 14:09:21.071: INFO: update-demo-nautilus-26544 is created but not running May 11 14:09:26.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-979' May 11 14:09:26.160: INFO: stderr: "" May 11 14:09:26.160: INFO: stdout: "update-demo-nautilus-26544 update-demo-nautilus-qtxn2 " May 11 14:09:26.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-26544 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-979' May 11 14:09:26.385: INFO: stderr: "" May 11 14:09:26.385: INFO: stdout: "true" May 11 14:09:26.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-26544 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-979' May 11 14:09:26.479: INFO: stderr: "" May 11 14:09:26.479: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 14:09:26.479: INFO: validating pod update-demo-nautilus-26544 May 11 14:09:26.482: INFO: got data: { "image": "nautilus.jpg" } May 11 14:09:26.482: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 14:09:26.482: INFO: update-demo-nautilus-26544 is verified up and running May 11 14:09:26.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qtxn2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-979' May 11 14:09:26.699: INFO: stderr: "" May 11 14:09:26.699: INFO: stdout: "true" May 11 14:09:26.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qtxn2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-979' May 11 14:09:26.787: INFO: stderr: "" May 11 14:09:26.788: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 14:09:26.788: INFO: validating pod update-demo-nautilus-qtxn2 May 11 14:09:26.791: INFO: got data: { "image": "nautilus.jpg" } May 11 14:09:26.791: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 11 14:09:26.791: INFO: update-demo-nautilus-qtxn2 is verified up and running STEP: rolling-update to new replication controller May 11 14:09:26.793: INFO: scanned /root for discovery docs: May 11 14:09:26.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-979' May 11 14:09:54.793: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 11 14:09:54.793: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 11 14:09:54.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-979' May 11 14:09:55.033: INFO: stderr: "" May 11 14:09:55.033: INFO: stdout: "update-demo-kitten-gxmtv update-demo-kitten-tjntt " May 11 14:09:55.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gxmtv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-979' May 11 14:09:55.112: INFO: stderr: "" May 11 14:09:55.113: INFO: stdout: "true" May 11 14:09:55.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gxmtv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-979' May 11 14:09:55.244: INFO: stderr: "" May 11 14:09:55.244: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 11 14:09:55.244: INFO: validating pod update-demo-kitten-gxmtv May 11 14:09:55.248: INFO: got data: { "image": "kitten.jpg" } May 11 14:09:55.248: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 11 14:09:55.248: INFO: update-demo-kitten-gxmtv is verified up and running May 11 14:09:55.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tjntt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-979' May 11 14:09:55.330: INFO: stderr: "" May 11 14:09:55.330: INFO: stdout: "true" May 11 14:09:55.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tjntt -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-979' May 11 14:09:55.414: INFO: stderr: "" May 11 14:09:55.414: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 11 14:09:55.414: INFO: validating pod update-demo-kitten-tjntt May 11 14:09:55.418: INFO: got data: { "image": "kitten.jpg" } May 11 14:09:55.418: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 11 14:09:55.418: INFO: update-demo-kitten-tjntt is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:09:55.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-979" for this suite. May 11 14:10:19.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:10:19.744: INFO: namespace kubectl-979 deletion completed in 24.323109754s • [SLOW TEST:64.684 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:10:19.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 11 14:10:28.783: INFO: 9 pods remaining May 11 14:10:28.783: INFO: 6 pods has nil DeletionTimestamp May 11 14:10:28.783: INFO: May 11 14:10:29.862: INFO: 5 pods remaining May 11 14:10:29.862: INFO: 0 pods has nil DeletionTimestamp May 11 14:10:29.862: INFO: STEP: Gathering metrics W0511 14:10:31.046495 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 11 14:10:31.046: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:10:31.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-754" for this suite. May 11 14:10:39.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:10:39.277: INFO: namespace gc-754 deletion completed in 8.229115373s • [SLOW TEST:19.534 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:10:39.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 11 14:10:39.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7420' May 11 14:10:39.743: INFO: stderr: "" May 11 14:10:39.743: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 May 11 14:10:39.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod 
--namespace=kubectl-7420' May 11 14:10:45.049: INFO: stderr: "" May 11 14:10:45.049: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:10:45.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7420" for this suite. May 11 14:10:53.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:10:53.624: INFO: namespace kubectl-7420 deletion completed in 8.317786135s • [SLOW TEST:14.347 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:10:53.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-01b29d01-62ac-4785-8642-747ac6f1493f STEP: Creating a pod to test consume secrets May 11 14:10:54.293: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fb274058-76a2-473d-b637-0ce6c72ecd1b" in namespace "projected-2949" to be "success or failure" May 11 14:10:54.320: INFO: Pod "pod-projected-secrets-fb274058-76a2-473d-b637-0ce6c72ecd1b": Phase="Pending", Reason="", readiness=false. Elapsed: 26.901901ms May 11 14:10:56.324: INFO: Pod "pod-projected-secrets-fb274058-76a2-473d-b637-0ce6c72ecd1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031535145s May 11 14:10:58.328: INFO: Pod "pod-projected-secrets-fb274058-76a2-473d-b637-0ce6c72ecd1b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035025854s May 11 14:11:00.332: INFO: Pod "pod-projected-secrets-fb274058-76a2-473d-b637-0ce6c72ecd1b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.038941968s STEP: Saw pod success May 11 14:11:00.332: INFO: Pod "pod-projected-secrets-fb274058-76a2-473d-b637-0ce6c72ecd1b" satisfied condition "success or failure" May 11 14:11:00.334: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-fb274058-76a2-473d-b637-0ce6c72ecd1b container secret-volume-test: STEP: delete the pod May 11 14:11:00.435: INFO: Waiting for pod pod-projected-secrets-fb274058-76a2-473d-b637-0ce6c72ecd1b to disappear May 11 14:11:00.532: INFO: Pod pod-projected-secrets-fb274058-76a2-473d-b637-0ce6c72ecd1b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:11:00.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2949" for this suite. May 11 14:11:06.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:11:06.724: INFO: namespace projected-2949 deletion completed in 6.189903717s • [SLOW TEST:13.099 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:11:06.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 14:11:06.984: INFO: Waiting up to 5m0s for pod "downwardapi-volume-672422d0-78e8-406f-8de9-91eb3c68ffdf" in namespace "projected-5663" to be "success or failure" May 11 14:11:07.008: INFO: Pod "downwardapi-volume-672422d0-78e8-406f-8de9-91eb3c68ffdf": Phase="Pending", Reason="", readiness=false. Elapsed: 24.477021ms May 11 14:11:09.135: INFO: Pod "downwardapi-volume-672422d0-78e8-406f-8de9-91eb3c68ffdf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15102077s May 11 14:11:11.139: INFO: Pod "downwardapi-volume-672422d0-78e8-406f-8de9-91eb3c68ffdf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155434681s May 11 14:11:13.142: INFO: Pod "downwardapi-volume-672422d0-78e8-406f-8de9-91eb3c68ffdf": Phase="Running", Reason="", readiness=true. Elapsed: 6.158240318s May 11 14:11:15.145: INFO: Pod "downwardapi-volume-672422d0-78e8-406f-8de9-91eb3c68ffdf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.161335373s STEP: Saw pod success May 11 14:11:15.145: INFO: Pod "downwardapi-volume-672422d0-78e8-406f-8de9-91eb3c68ffdf" satisfied condition "success or failure" May 11 14:11:15.148: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-672422d0-78e8-406f-8de9-91eb3c68ffdf container client-container: STEP: delete the pod May 11 14:11:15.190: INFO: Waiting for pod downwardapi-volume-672422d0-78e8-406f-8de9-91eb3c68ffdf to disappear May 11 14:11:15.340: INFO: Pod downwardapi-volume-672422d0-78e8-406f-8de9-91eb3c68ffdf no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:11:15.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5663" for this suite. May 11 14:11:21.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:11:21.477: INFO: namespace projected-5663 deletion completed in 6.133609438s • [SLOW TEST:14.753 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:11:21.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-30c1e593-0c79-48a4-92c6-04bc515ee15d STEP: Creating a pod to test consume secrets May 11 14:11:21.889: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a0a268c3-c8cd-4f24-bd70-1e118dde3129" in namespace "projected-2547" to be "success or failure" May 11 14:11:21.958: INFO: Pod "pod-projected-secrets-a0a268c3-c8cd-4f24-bd70-1e118dde3129": Phase="Pending", Reason="", readiness=false. Elapsed: 68.449634ms May 11 14:11:23.962: INFO: Pod "pod-projected-secrets-a0a268c3-c8cd-4f24-bd70-1e118dde3129": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072477307s May 11 14:11:25.966: INFO: Pod "pod-projected-secrets-a0a268c3-c8cd-4f24-bd70-1e118dde3129": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076209063s May 11 14:11:27.968: INFO: Pod "pod-projected-secrets-a0a268c3-c8cd-4f24-bd70-1e118dde3129": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079123822s May 11 14:11:29.972: INFO: Pod "pod-projected-secrets-a0a268c3-c8cd-4f24-bd70-1e118dde3129": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.082949194s STEP: Saw pod success May 11 14:11:29.972: INFO: Pod "pod-projected-secrets-a0a268c3-c8cd-4f24-bd70-1e118dde3129" satisfied condition "success or failure" May 11 14:11:29.975: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-a0a268c3-c8cd-4f24-bd70-1e118dde3129 container projected-secret-volume-test: STEP: delete the pod May 11 14:11:30.122: INFO: Waiting for pod pod-projected-secrets-a0a268c3-c8cd-4f24-bd70-1e118dde3129 to disappear May 11 14:11:30.188: INFO: Pod pod-projected-secrets-a0a268c3-c8cd-4f24-bd70-1e118dde3129 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:11:30.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2547" for this suite. May 11 14:11:36.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:11:36.498: INFO: namespace projected-2547 deletion completed in 6.305020755s • [SLOW TEST:15.020 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:11:36.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-075d2d52-fc4a-4101-b608-29988d9e5371 STEP: Creating a pod to test consume secrets May 11 14:11:36.958: INFO: Waiting up to 5m0s for pod "pod-secrets-f60c9de8-e0e1-4f42-a6d0-37ea758b44a7" in namespace "secrets-9049" to be "success or failure" May 11 14:11:36.973: INFO: Pod "pod-secrets-f60c9de8-e0e1-4f42-a6d0-37ea758b44a7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.624017ms May 11 14:11:38.977: INFO: Pod "pod-secrets-f60c9de8-e0e1-4f42-a6d0-37ea758b44a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01964638s May 11 14:11:40.982: INFO: Pod "pod-secrets-f60c9de8-e0e1-4f42-a6d0-37ea758b44a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024152669s May 11 14:11:42.985: INFO: Pod "pod-secrets-f60c9de8-e0e1-4f42-a6d0-37ea758b44a7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.026798986s STEP: Saw pod success May 11 14:11:42.985: INFO: Pod "pod-secrets-f60c9de8-e0e1-4f42-a6d0-37ea758b44a7" satisfied condition "success or failure" May 11 14:11:43.077: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-f60c9de8-e0e1-4f42-a6d0-37ea758b44a7 container secret-volume-test: STEP: delete the pod May 11 14:11:43.558: INFO: Waiting for pod pod-secrets-f60c9de8-e0e1-4f42-a6d0-37ea758b44a7 to disappear May 11 14:11:43.658: INFO: Pod pod-secrets-f60c9de8-e0e1-4f42-a6d0-37ea758b44a7 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:11:43.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9049" for this suite. May 11 14:11:49.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:11:49.793: INFO: namespace secrets-9049 deletion completed in 6.131790249s STEP: Destroying namespace "secret-namespace-1835" for this suite. May 11 14:11:55.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:11:55.903: INFO: namespace secret-namespace-1835 deletion completed in 6.109488464s • [SLOW TEST:19.405 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:11:55.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-0ea5b739-5f71-44c3-a6b2-b424da049e6e STEP: Creating a pod to test consume configMaps May 11 14:11:56.353: INFO: Waiting up to 5m0s for pod "pod-configmaps-c9f8c864-80cd-4b44-850e-dc31cf4be81c" in namespace "configmap-2174" to be "success or failure" May 11 14:11:56.539: INFO: Pod "pod-configmaps-c9f8c864-80cd-4b44-850e-dc31cf4be81c": Phase="Pending", Reason="", readiness=false. Elapsed: 186.273832ms May 11 14:11:58.542: INFO: Pod "pod-configmaps-c9f8c864-80cd-4b44-850e-dc31cf4be81c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.189727616s May 11 14:12:00.547: INFO: Pod "pod-configmaps-c9f8c864-80cd-4b44-850e-dc31cf4be81c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194114614s May 11 14:12:02.731: INFO: Pod "pod-configmaps-c9f8c864-80cd-4b44-850e-dc31cf4be81c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.378095393s May 11 14:12:04.735: INFO: Pod "pod-configmaps-c9f8c864-80cd-4b44-850e-dc31cf4be81c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.382399163s STEP: Saw pod success May 11 14:12:04.735: INFO: Pod "pod-configmaps-c9f8c864-80cd-4b44-850e-dc31cf4be81c" satisfied condition "success or failure" May 11 14:12:04.738: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-c9f8c864-80cd-4b44-850e-dc31cf4be81c container configmap-volume-test: STEP: delete the pod May 11 14:12:04.948: INFO: Waiting for pod pod-configmaps-c9f8c864-80cd-4b44-850e-dc31cf4be81c to disappear May 11 14:12:04.979: INFO: Pod pod-configmaps-c9f8c864-80cd-4b44-850e-dc31cf4be81c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:12:04.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2174" for this suite. May 11 14:12:11.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:12:11.083: INFO: namespace configmap-2174 deletion completed in 6.100439564s • [SLOW TEST:15.180 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:12:11.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-92a4587b-50a2-426f-86d0-daae49b4c5a8 STEP: Creating a pod to test consume configMaps May 11 14:12:11.328: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5f752706-10e7-43ac-86c4-6afdd121971f" in namespace "projected-3887" to be "success or failure" May 11 14:12:11.391: INFO: Pod "pod-projected-configmaps-5f752706-10e7-43ac-86c4-6afdd121971f": Phase="Pending", Reason="", readiness=false. Elapsed: 62.543585ms May 11 14:12:13.407: INFO: Pod "pod-projected-configmaps-5f752706-10e7-43ac-86c4-6afdd121971f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078799921s May 11 14:12:15.411: INFO: Pod "pod-projected-configmaps-5f752706-10e7-43ac-86c4-6afdd121971f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082807468s May 11 14:12:17.414: INFO: Pod "pod-projected-configmaps-5f752706-10e7-43ac-86c4-6afdd121971f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.086076352s STEP: Saw pod success May 11 14:12:17.415: INFO: Pod "pod-projected-configmaps-5f752706-10e7-43ac-86c4-6afdd121971f" satisfied condition "success or failure" May 11 14:12:17.417: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-5f752706-10e7-43ac-86c4-6afdd121971f container projected-configmap-volume-test: STEP: delete the pod May 11 14:12:17.510: INFO: Waiting for pod pod-projected-configmaps-5f752706-10e7-43ac-86c4-6afdd121971f to disappear May 11 14:12:17.683: INFO: Pod pod-projected-configmaps-5f752706-10e7-43ac-86c4-6afdd121971f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:12:17.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3887" for this suite. May 11 14:12:23.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:12:23.918: INFO: namespace projected-3887 deletion completed in 6.188495946s • [SLOW TEST:12.835 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:12:23.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 14:12:24.115: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:12:30.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4033" for this suite. 
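The pods-4033 case above exercises the API server's exec subresource over a websocket; the log shows the pod being created and submitted, but not its manifest. A minimal stand-in (the pod name and image are assumptions, not taken from the log) is any long-running pod whose exec endpoint can then be driven, for example by kubectl, which talks to the same subresource:

apiVersion: v1
kind: Pod
metadata:
  name: pod-exec-websocket-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                   # assumed image
    command: ["sh", "-c", "sleep 600"]
# e.g. kubectl exec pod-exec-websocket-example -- echo remote-command-test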
May 11 14:13:14.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:13:14.401: INFO: namespace pods-4033 deletion completed in 44.087021639s • [SLOW TEST:50.483 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:13:14.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-65h7 STEP: Creating a pod to test atomic-volume-subpath May 11 14:13:14.883: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-65h7" in namespace "subpath-1808" to be "success or failure" May 11 14:13:14.927: INFO: Pod "pod-subpath-test-projected-65h7": Phase="Pending", Reason="", readiness=false. Elapsed: 43.349851ms May 11 14:13:16.931: INFO: Pod "pod-subpath-test-projected-65h7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047490435s May 11 14:13:18.995: INFO: Pod "pod-subpath-test-projected-65h7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111530971s May 11 14:13:21.474: INFO: Pod "pod-subpath-test-projected-65h7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.590783486s May 11 14:13:23.477: INFO: Pod "pod-subpath-test-projected-65h7": Phase="Running", Reason="", readiness=true. Elapsed: 8.593634381s May 11 14:13:25.481: INFO: Pod "pod-subpath-test-projected-65h7": Phase="Running", Reason="", readiness=true. Elapsed: 10.597263918s May 11 14:13:27.485: INFO: Pod "pod-subpath-test-projected-65h7": Phase="Running", Reason="", readiness=true. Elapsed: 12.601032525s May 11 14:13:29.488: INFO: Pod "pod-subpath-test-projected-65h7": Phase="Running", Reason="", readiness=true. Elapsed: 14.604051656s May 11 14:13:32.876: INFO: Pod "pod-subpath-test-projected-65h7": Phase="Running", Reason="", readiness=true. Elapsed: 17.992866197s May 11 14:13:34.880: INFO: Pod "pod-subpath-test-projected-65h7": Phase="Running", Reason="", readiness=true. Elapsed: 19.996417412s May 11 14:13:36.885: INFO: Pod "pod-subpath-test-projected-65h7": Phase="Running", Reason="", readiness=true. Elapsed: 22.001539539s May 11 14:13:38.889: INFO: Pod "pod-subpath-test-projected-65h7": Phase="Running", Reason="", readiness=true. Elapsed: 24.005390932s May 11 14:13:40.894: INFO: Pod "pod-subpath-test-projected-65h7": Phase="Running", Reason="", readiness=true. 
Elapsed: 26.010066033s May 11 14:13:42.897: INFO: Pod "pod-subpath-test-projected-65h7": Phase="Running", Reason="", readiness=true. Elapsed: 28.013832826s May 11 14:13:44.901: INFO: Pod "pod-subpath-test-projected-65h7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.01792626s STEP: Saw pod success May 11 14:13:44.902: INFO: Pod "pod-subpath-test-projected-65h7" satisfied condition "success or failure" May 11 14:13:44.905: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-65h7 container test-container-subpath-projected-65h7: STEP: delete the pod May 11 14:13:44.973: INFO: Waiting for pod pod-subpath-test-projected-65h7 to disappear May 11 14:13:45.023: INFO: Pod pod-subpath-test-projected-65h7 no longer exists STEP: Deleting pod pod-subpath-test-projected-65h7 May 11 14:13:45.023: INFO: Deleting pod "pod-subpath-test-projected-65h7" in namespace "subpath-1808" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:13:45.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1808" for this suite. May 11 14:13:51.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:13:51.230: INFO: namespace subpath-1808 deletion completed in 6.176207044s • [SLOW TEST:36.830 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:13:51.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 11 14:13:51.372: INFO: Waiting up to 5m0s for pod "downward-api-6d03ad69-8d2e-4112-8213-4e2c6a0e67d4" in namespace "downward-api-7923" to be "success or failure" May 11 14:13:51.388: INFO: Pod "downward-api-6d03ad69-8d2e-4112-8213-4e2c6a0e67d4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.101666ms May 11 14:13:53.391: INFO: Pod "downward-api-6d03ad69-8d2e-4112-8213-4e2c6a0e67d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019436358s May 11 14:13:55.395: INFO: Pod "downward-api-6d03ad69-8d2e-4112-8213-4e2c6a0e67d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023901263s May 11 14:13:57.582: INFO: Pod "downward-api-6d03ad69-8d2e-4112-8213-4e2c6a0e67d4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.210796633s May 11 14:13:59.587: INFO: Pod "downward-api-6d03ad69-8d2e-4112-8213-4e2c6a0e67d4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.214953069s May 11 14:14:01.591: INFO: Pod "downward-api-6d03ad69-8d2e-4112-8213-4e2c6a0e67d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.21898647s STEP: Saw pod success May 11 14:14:01.591: INFO: Pod "downward-api-6d03ad69-8d2e-4112-8213-4e2c6a0e67d4" satisfied condition "success or failure" May 11 14:14:01.594: INFO: Trying to get logs from node iruya-worker pod downward-api-6d03ad69-8d2e-4112-8213-4e2c6a0e67d4 container dapi-container: STEP: delete the pod May 11 14:14:01.616: INFO: Waiting for pod downward-api-6d03ad69-8d2e-4112-8213-4e2c6a0e67d4 to disappear May 11 14:14:01.646: INFO: Pod downward-api-6d03ad69-8d2e-4112-8213-4e2c6a0e67d4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:14:01.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7923" for this suite. May 11 14:14:07.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:14:07.806: INFO: namespace downward-api-7923 deletion completed in 6.156773059s • [SLOW TEST:16.576 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:14:07.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium May 11 14:14:07.982: INFO: Waiting up to 5m0s for pod "pod-d5d125df-ce15-4410-a4ac-b2b294b2d506" in namespace "emptydir-7402" to be "success or failure" May 11 14:14:07.987: INFO: Pod "pod-d5d125df-ce15-4410-a4ac-b2b294b2d506": Phase="Pending", Reason="", readiness=false. Elapsed: 5.140731ms May 11 14:14:10.223: INFO: Pod "pod-d5d125df-ce15-4410-a4ac-b2b294b2d506": Phase="Pending", Reason="", readiness=false. Elapsed: 2.240550762s May 11 14:14:12.233: INFO: Pod "pod-d5d125df-ce15-4410-a4ac-b2b294b2d506": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.251327896s STEP: Saw pod success May 11 14:14:12.233: INFO: Pod "pod-d5d125df-ce15-4410-a4ac-b2b294b2d506" satisfied condition "success or failure" May 11 14:14:12.236: INFO: Trying to get logs from node iruya-worker2 pod pod-d5d125df-ce15-4410-a4ac-b2b294b2d506 container test-container: STEP: delete the pod May 11 14:14:12.290: INFO: Waiting for pod pod-d5d125df-ce15-4410-a4ac-b2b294b2d506 to disappear May 11 14:14:12.299: INFO: Pod pod-d5d125df-ce15-4410-a4ac-b2b294b2d506 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:14:12.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7402" for this suite. May 11 14:14:18.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:14:18.401: INFO: namespace emptydir-7402 deletion completed in 6.099766844s • [SLOW TEST:10.594 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:14:18.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command May 11 14:14:18.500: INFO: Waiting up to 5m0s for pod "var-expansion-5706032b-cd30-477d-b0b5-b9372e93c17d" in namespace "var-expansion-7076" to be "success or failure" May 11 14:14:18.504: INFO: Pod "var-expansion-5706032b-cd30-477d-b0b5-b9372e93c17d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.427844ms May 11 14:14:20.509: INFO: Pod "var-expansion-5706032b-cd30-477d-b0b5-b9372e93c17d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00833792s May 11 14:14:22.512: INFO: Pod "var-expansion-5706032b-cd30-477d-b0b5-b9372e93c17d": Phase="Running", Reason="", readiness=true. Elapsed: 4.01187502s May 11 14:14:24.520: INFO: Pod "var-expansion-5706032b-cd30-477d-b0b5-b9372e93c17d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.01950014s STEP: Saw pod success May 11 14:14:24.520: INFO: Pod "var-expansion-5706032b-cd30-477d-b0b5-b9372e93c17d" satisfied condition "success or failure" May 11 14:14:24.522: INFO: Trying to get logs from node iruya-worker pod var-expansion-5706032b-cd30-477d-b0b5-b9372e93c17d container dapi-container: STEP: delete the pod May 11 14:14:24.553: INFO: Waiting for pod var-expansion-5706032b-cd30-477d-b0b5-b9372e93c17d to disappear May 11 14:14:24.562: INFO: Pod var-expansion-5706032b-cd30-477d-b0b5-b9372e93c17d no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:14:24.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7076" for this suite. May 11 14:14:30.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:14:30.652: INFO: namespace var-expansion-7076 deletion completed in 6.087371364s • [SLOW TEST:12.251 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:14:30.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-63ab8e36-276d-4d90-8a5f-c0f9b80dd940 STEP: Creating a pod to test consume configMaps May 11 14:14:30.715: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-771090dd-b0b0-47fd-8eea-e8a0f87dbf85" in namespace "projected-1486" to be "success or failure" May 11 14:14:30.750: INFO: Pod "pod-projected-configmaps-771090dd-b0b0-47fd-8eea-e8a0f87dbf85": Phase="Pending", Reason="", readiness=false. Elapsed: 34.959941ms May 11 14:14:32.774: INFO: Pod "pod-projected-configmaps-771090dd-b0b0-47fd-8eea-e8a0f87dbf85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059004841s May 11 14:14:34.804: INFO: Pod "pod-projected-configmaps-771090dd-b0b0-47fd-8eea-e8a0f87dbf85": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.0881152s STEP: Saw pod success May 11 14:14:34.804: INFO: Pod "pod-projected-configmaps-771090dd-b0b0-47fd-8eea-e8a0f87dbf85" satisfied condition "success or failure" May 11 14:14:34.805: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-771090dd-b0b0-47fd-8eea-e8a0f87dbf85 container projected-configmap-volume-test: STEP: delete the pod May 11 14:14:34.895: INFO: Waiting for pod pod-projected-configmaps-771090dd-b0b0-47fd-8eea-e8a0f87dbf85 to disappear May 11 14:14:34.909: INFO: Pod pod-projected-configmaps-771090dd-b0b0-47fd-8eea-e8a0f87dbf85 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:14:34.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1486" for this suite. May 11 14:14:40.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:14:41.016: INFO: namespace projected-1486 deletion completed in 6.103844475s • [SLOW TEST:10.363 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:14:41.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs May 11 14:14:41.139: INFO: Waiting up to 5m0s for pod "pod-e916d16e-299d-44e6-943b-ec90b94e2840" in namespace "emptydir-1225" to be "success or failure" May 11 14:14:41.173: INFO: Pod "pod-e916d16e-299d-44e6-943b-ec90b94e2840": Phase="Pending", Reason="", readiness=false. Elapsed: 33.948103ms May 11 14:14:43.177: INFO: Pod "pod-e916d16e-299d-44e6-943b-ec90b94e2840": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037907038s May 11 14:14:45.181: INFO: Pod "pod-e916d16e-299d-44e6-943b-ec90b94e2840": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.041619337s STEP: Saw pod success May 11 14:14:45.181: INFO: Pod "pod-e916d16e-299d-44e6-943b-ec90b94e2840" satisfied condition "success or failure" May 11 14:14:45.184: INFO: Trying to get logs from node iruya-worker pod pod-e916d16e-299d-44e6-943b-ec90b94e2840 container test-container: STEP: delete the pod May 11 14:14:45.412: INFO: Waiting for pod pod-e916d16e-299d-44e6-943b-ec90b94e2840 to disappear May 11 14:14:45.424: INFO: Pod pod-e916d16e-299d-44e6-943b-ec90b94e2840 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:14:45.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1225" for this suite. May 11 14:14:51.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:14:51.509: INFO: namespace emptydir-1225 deletion completed in 6.082457413s • [SLOW TEST:10.493 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:14:51.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium May 11 14:14:51.660: INFO: Waiting up to 5m0s for pod "pod-9d192ce4-c3db-46c1-ada4-17381c05ae2b" in namespace "emptydir-2254" to be "success or failure" May 11 14:14:51.696: INFO: Pod "pod-9d192ce4-c3db-46c1-ada4-17381c05ae2b": Phase="Pending", Reason="", readiness=false. Elapsed: 36.353531ms May 11 14:14:54.337: INFO: Pod "pod-9d192ce4-c3db-46c1-ada4-17381c05ae2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.677087583s May 11 14:14:56.340: INFO: Pod "pod-9d192ce4-c3db-46c1-ada4-17381c05ae2b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.680307794s STEP: Saw pod success May 11 14:14:56.340: INFO: Pod "pod-9d192ce4-c3db-46c1-ada4-17381c05ae2b" satisfied condition "success or failure" May 11 14:14:56.343: INFO: Trying to get logs from node iruya-worker2 pod pod-9d192ce4-c3db-46c1-ada4-17381c05ae2b container test-container: STEP: delete the pod May 11 14:14:56.412: INFO: Waiting for pod pod-9d192ce4-c3db-46c1-ada4-17381c05ae2b to disappear May 11 14:14:56.452: INFO: Pod pod-9d192ce4-c3db-46c1-ada4-17381c05ae2b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:14:56.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2254" for this suite. May 11 14:15:02.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:15:02.587: INFO: namespace emptydir-2254 deletion completed in 6.130266391s • [SLOW TEST:11.077 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:15:02.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3819 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet May 11 14:15:03.025: INFO: Found 0 stateful pods, waiting for 3 May 11 14:15:13.030: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 14:15:13.030: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 14:15:13.030: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 11 14:15:23.030: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 14:15:23.030: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 14:15:23.030: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 11 14:15:23.054: INFO: Updating stateful 
set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 11 14:15:33.153: INFO: Updating stateful set ss2 May 11 14:15:33.251: INFO: Waiting for Pod statefulset-3819/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted May 11 14:15:45.866: INFO: Found 2 stateful pods, waiting for 3 May 11 14:15:55.901: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 14:15:55.901: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 14:15:55.901: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 11 14:15:55.922: INFO: Updating stateful set ss2 May 11 14:15:56.069: INFO: Waiting for Pod statefulset-3819/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 11 14:16:06.100: INFO: Updating stateful set ss2 May 11 14:16:06.182: INFO: Waiting for StatefulSet statefulset-3819/ss2 to complete update May 11 14:16:06.182: INFO: Waiting for Pod statefulset-3819/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 11 14:16:16.203: INFO: Waiting for StatefulSet statefulset-3819/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 11 14:16:26.195: INFO: Deleting all statefulset in ns statefulset-3819 May 11 14:16:26.196: INFO: Scaling statefulset ss2 to 0 May 11 14:16:56.333: INFO: Waiting for statefulset status.replicas updated to 0 May 11 14:16:56.335: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:16:56.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3819" for this suite. 
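The canary and phased rolling update above is driven by updateStrategy.rollingUpdate.partition: only pods with an ordinal >= partition are moved to the new revision. The log reports the observable pieces (3 replicas of ss2, nginx:1.14-alpine updated to 1.15-alpine, ss2-2 canaried first) but not the manifest itself; a sketch consistent with it, with the selector labels assumed:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test            # the headless "service test" created above
  selector:
    matchLabels:
      app: ss2                 # assumed label; the log does not print the selector
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 3             # greater than every ordinal: no pod is updated yet
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
# Canary: switch the image to nginx:1.15-alpine and lower partition to 2 so
# only ss2-2 is replaced; lowering it step by step then phases the rollout
# through ss2-1 and ss2-0, matching the revision waits logged above.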
May 11 14:17:04.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:17:04.495: INFO: namespace statefulset-3819 deletion completed in 8.129466014s • [SLOW TEST:121.908 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:17:04.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 14:17:04.563: INFO: Waiting up to 5m0s for pod "downwardapi-volume-10b1ba80-493a-44b6-bb0a-7cbf6926d4d9" in namespace "projected-9718" to be "success or failure" May 11 14:17:04.621: INFO: Pod "downwardapi-volume-10b1ba80-493a-44b6-bb0a-7cbf6926d4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 57.645454ms May 11 14:17:07.722: INFO: Pod "downwardapi-volume-10b1ba80-493a-44b6-bb0a-7cbf6926d4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.158657253s May 11 14:17:09.725: INFO: Pod "downwardapi-volume-10b1ba80-493a-44b6-bb0a-7cbf6926d4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.161297175s May 11 14:17:11.731: INFO: Pod "downwardapi-volume-10b1ba80-493a-44b6-bb0a-7cbf6926d4d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.167557637s STEP: Saw pod success May 11 14:17:11.731: INFO: Pod "downwardapi-volume-10b1ba80-493a-44b6-bb0a-7cbf6926d4d9" satisfied condition "success or failure" May 11 14:17:11.733: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-10b1ba80-493a-44b6-bb0a-7cbf6926d4d9 container client-container: STEP: delete the pod May 11 14:17:11.850: INFO: Waiting for pod downwardapi-volume-10b1ba80-493a-44b6-bb0a-7cbf6926d4d9 to disappear May 11 14:17:11.854: INFO: Pod downwardapi-volume-10b1ba80-493a-44b6-bb0a-7cbf6926d4d9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:17:11.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9718" for this suite. 
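The projected-9718 test above asserts that when a container sets no memory limit, a downward API resourceFieldRef for limits.memory reports the node's allocatable memory instead. The manifest is not shown in the log; a minimal sketch (pod and volume names and the image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # No resources.limits.memory is set here, so the value written to the
    # file falls back to node allocatable memory -- the behavior under test.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory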
May 11 14:17:17.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:17:18.019: INFO: namespace projected-9718 deletion completed in 6.161621038s • [SLOW TEST:13.524 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:17:18.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command May 11 14:17:18.196: INFO: Waiting up to 5m0s for pod "client-containers-9bddb47e-f35c-4bcf-b58b-fd2d20e00aa6" in namespace "containers-5493" to be "success or failure" May 11 14:17:18.239: INFO: Pod "client-containers-9bddb47e-f35c-4bcf-b58b-fd2d20e00aa6": Phase="Pending", Reason="", readiness=false. Elapsed: 42.731668ms May 11 14:17:20.285: INFO: Pod "client-containers-9bddb47e-f35c-4bcf-b58b-fd2d20e00aa6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089616455s May 11 14:17:22.289: INFO: Pod "client-containers-9bddb47e-f35c-4bcf-b58b-fd2d20e00aa6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093127129s STEP: Saw pod success May 11 14:17:22.289: INFO: Pod "client-containers-9bddb47e-f35c-4bcf-b58b-fd2d20e00aa6" satisfied condition "success or failure" May 11 14:17:22.292: INFO: Trying to get logs from node iruya-worker2 pod client-containers-9bddb47e-f35c-4bcf-b58b-fd2d20e00aa6 container test-container: STEP: delete the pod May 11 14:17:22.339: INFO: Waiting for pod client-containers-9bddb47e-f35c-4bcf-b58b-fd2d20e00aa6 to disappear May 11 14:17:22.368: INFO: Pod client-containers-9bddb47e-f35c-4bcf-b58b-fd2d20e00aa6 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:17:22.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5493" for this suite. 
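"Override the image's default command (docker entrypoint)" maps to the pod-spec command field: command replaces the image's ENTRYPOINT, while args replaces its CMD. The log omits the manifest; a hypothetical equivalent (name, image, and the echoed text are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo", "entrypoint", "overridden"]   # replaces ENTRYPOINT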
May 11 14:17:28.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:17:28.673: INFO: namespace containers-5493 deletion completed in 6.302222554s • [SLOW TEST:10.655 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:17:28.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium May 11 14:17:28.768: INFO: Waiting up to 5m0s for pod "pod-91fb06d0-a288-4490-8645-56d908b86079" in namespace "emptydir-5935" to be "success or failure" May 11 14:17:28.787: INFO: Pod "pod-91fb06d0-a288-4490-8645-56d908b86079": Phase="Pending", Reason="", readiness=false. Elapsed: 19.696226ms May 11 14:17:30.807: INFO: Pod "pod-91fb06d0-a288-4490-8645-56d908b86079": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038992812s May 11 14:17:32.811: INFO: Pod "pod-91fb06d0-a288-4490-8645-56d908b86079": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042994813s May 11 14:17:34.814: INFO: Pod "pod-91fb06d0-a288-4490-8645-56d908b86079": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.046387635s STEP: Saw pod success May 11 14:17:34.814: INFO: Pod "pod-91fb06d0-a288-4490-8645-56d908b86079" satisfied condition "success or failure" May 11 14:17:34.816: INFO: Trying to get logs from node iruya-worker pod pod-91fb06d0-a288-4490-8645-56d908b86079 container test-container: STEP: delete the pod May 11 14:17:34.836: INFO: Waiting for pod pod-91fb06d0-a288-4490-8645-56d908b86079 to disappear May 11 14:17:34.864: INFO: Pod pod-91fb06d0-a288-4490-8645-56d908b86079 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:17:34.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5935" for this suite. 
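The emptydir-5935 case above is one of a family of emptyDir tests parameterized by (user, file mode, medium). The conformance image and its exact flags are not in the log; a busybox approximation of the (non-root,0666,default) variant, with the UID assumed:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-example   # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000             # "non-root"; the exact UID is an assumption
  containers:
  - name: test-container
    image: busybox
    # Create a file with the mode under test and print it back; the emptyDir
    # mount point is world-writable, so a non-root user can write into it.
    command: ["sh", "-c", "touch /mnt/test/f && chmod 0666 /mnt/test/f && stat -c %a /mnt/test/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir: {}                # "default" medium: backed by node disk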
May 11 14:17:40.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:17:40.983: INFO: namespace emptydir-5935 deletion completed in 6.116844282s • [SLOW TEST:12.309 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:17:40.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs May 11 14:17:41.242: INFO: Waiting up to 5m0s for pod "pod-eccca9f0-cb86-408a-a810-e93adc394285" in namespace "emptydir-6514" to be "success or failure" May 11 14:17:41.382: INFO: Pod "pod-eccca9f0-cb86-408a-a810-e93adc394285": Phase="Pending", Reason="", readiness=false. Elapsed: 139.919567ms May 11 14:17:43.386: INFO: Pod "pod-eccca9f0-cb86-408a-a810-e93adc394285": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144053933s May 11 14:17:45.390: INFO: Pod "pod-eccca9f0-cb86-408a-a810-e93adc394285": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147824788s May 11 14:17:47.392: INFO: Pod "pod-eccca9f0-cb86-408a-a810-e93adc394285": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.150377266s STEP: Saw pod success May 11 14:17:47.392: INFO: Pod "pod-eccca9f0-cb86-408a-a810-e93adc394285" satisfied condition "success or failure" May 11 14:17:47.394: INFO: Trying to get logs from node iruya-worker2 pod pod-eccca9f0-cb86-408a-a810-e93adc394285 container test-container: STEP: delete the pod May 11 14:17:47.514: INFO: Waiting for pod pod-eccca9f0-cb86-408a-a810-e93adc394285 to disappear May 11 14:17:47.556: INFO: Pod pod-eccca9f0-cb86-408a-a810-e93adc394285 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:17:47.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6514" for this suite. 
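The "volume on tmpfs" variants select the Memory medium, which backs the emptyDir with tmpfs; the "correct mode" assertion concerns the mount point itself, which Kubernetes creates world-writable. A sketch (names assumed) that surfaces both facts:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Print the mount point's mode, then confirm it is a tmpfs mount.
    command: ["sh", "-c", "stat -c %a /mnt/test && grep /mnt/test /proc/mounts"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory            # tmpfs-backed instead of node disk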
May 11 14:17:53.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:17:53.682: INFO: namespace emptydir-6514 deletion completed in 6.12287835s • [SLOW TEST:12.699 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:17:53.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs May 11 14:17:53.757: INFO: Waiting up to 5m0s for pod "pod-6a375667-f157-4d05-8aab-370fd97ac3af" in namespace "emptydir-6876" to be "success or failure" May 11 14:17:53.795: INFO: Pod "pod-6a375667-f157-4d05-8aab-370fd97ac3af": Phase="Pending", Reason="", readiness=false. Elapsed: 37.222332ms May 11 14:17:55.799: INFO: Pod "pod-6a375667-f157-4d05-8aab-370fd97ac3af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041130908s May 11 14:17:57.802: INFO: Pod "pod-6a375667-f157-4d05-8aab-370fd97ac3af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044302941s STEP: Saw pod success May 11 14:17:57.802: INFO: Pod "pod-6a375667-f157-4d05-8aab-370fd97ac3af" satisfied condition "success or failure" May 11 14:17:57.804: INFO: Trying to get logs from node iruya-worker pod pod-6a375667-f157-4d05-8aab-370fd97ac3af container test-container: STEP: delete the pod May 11 14:17:57.820: INFO: Waiting for pod pod-6a375667-f157-4d05-8aab-370fd97ac3af to disappear May 11 14:17:57.825: INFO: Pod pod-6a375667-f157-4d05-8aab-370fd97ac3af no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:17:57.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6876" for this suite. 
May 11 14:18:03.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:18:03.915: INFO: namespace emptydir-6876 deletion completed in 6.086757986s • [SLOW TEST:10.232 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:18:03.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC May 11 14:18:04.050: INFO: namespace kubectl-7283 May 11 14:18:04.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7283' May 11 14:18:07.315: INFO: stderr: "" May 11 14:18:07.315: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 11 14:18:08.319: INFO: Selector matched 1 pods for map[app:redis] May 11 14:18:08.319: INFO: Found 0 / 1 May 11 14:18:09.496: INFO: Selector matched 1 pods for map[app:redis] May 11 14:18:09.496: INFO: Found 0 / 1 May 11 14:18:10.520: INFO: Selector matched 1 pods for map[app:redis] May 11 14:18:10.520: INFO: Found 0 / 1 May 11 14:18:11.320: INFO: Selector matched 1 pods for map[app:redis] May 11 14:18:11.320: INFO: Found 0 / 1 May 11 14:18:12.320: INFO: Selector matched 1 pods for map[app:redis] May 11 14:18:12.320: INFO: Found 1 / 1 May 11 14:18:12.320: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 11 14:18:12.324: INFO: Selector matched 1 pods for map[app:redis] May 11 14:18:12.324: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 11 14:18:12.324: INFO: wait on redis-master startup in kubectl-7283 May 11 14:18:12.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-nqqjk redis-master --namespace=kubectl-7283' May 11 14:18:12.434: INFO: stderr: "" May 11 14:18:12.434: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 11 May 14:18:11.106 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 May 14:18:11.106 # Server started, Redis version 3.2.12\n1:M 11 May 14:18:11.106 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 May 14:18:11.106 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 11 14:18:12.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7283' May 11 14:18:12.594: INFO: stderr: "" May 11 14:18:12.594: INFO: stdout: "service/rm2 exposed\n" May 11 14:18:12.665: INFO: Service rm2 in namespace kubectl-7283 found. STEP: exposing service May 11 14:18:14.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7283' May 11 14:18:14.803: INFO: stderr: "" May 11 14:18:14.803: INFO: stdout: "service/rm3 exposed\n" May 11 14:18:14.824: INFO: Service rm3 in namespace kubectl-7283 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:18:16.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7283" for this suite. 
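Note: kubectl expose copies the selector from the exposed object, so rm2 selects the redis-master pods directly and rm3 (which exposes the rm2 service) inherits the same selector. Reconstructed from the flags above (the test never dumps the created object, and the selector is assumed to be the app=redis,role=master pair the redis-master fixture uses elsewhere in this run), rm2 is roughly:

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Service
  metadata:
    name: rm2
    namespace: kubectl-7283
  spec:
    type: ClusterIP
    selector:
      app: redis
      role: master
    ports:
    - port: 1234
      targetPort: 6379
  EOF

rm3 differs only in its name and port: 2345, with the same targetPort 6379.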
May 11 14:18:40.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:18:40.992: INFO: namespace kubectl-7283 deletion completed in 24.136015007s • [SLOW TEST:37.077 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:18:40.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 11 14:18:41.469: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f5fb101-d701-4366-87d3-dd614f807a29" in namespace "projected-9764" to be "success or failure" May 11 14:18:41.508: INFO: Pod "downwardapi-volume-4f5fb101-d701-4366-87d3-dd614f807a29": Phase="Pending", Reason="", readiness=false. Elapsed: 39.258635ms May 11 14:18:43.513: INFO: Pod "downwardapi-volume-4f5fb101-d701-4366-87d3-dd614f807a29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044023914s May 11 14:18:45.516: INFO: Pod "downwardapi-volume-4f5fb101-d701-4366-87d3-dd614f807a29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04760442s STEP: Saw pod success May 11 14:18:45.517: INFO: Pod "downwardapi-volume-4f5fb101-d701-4366-87d3-dd614f807a29" satisfied condition "success or failure" May 11 14:18:45.520: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-4f5fb101-d701-4366-87d3-dd614f807a29 container client-container: STEP: delete the pod May 11 14:18:45.677: INFO: Waiting for pod downwardapi-volume-4f5fb101-d701-4366-87d3-dd614f807a29 to disappear May 11 14:18:45.705: INFO: Pod downwardapi-volume-4f5fb101-d701-4366-87d3-dd614f807a29 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:18:45.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9764" for this suite. 
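Note: DefaultMode on a projected downward API volume is set on the volume source, not per container. A minimal sketch of the kind of pod this test builds (pod name, image, mode, and item path are illustrative):

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-defaultmode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.29
      command: ["sh", "-c", "ls -l /etc/podinfo"]   # listed file modes should reflect defaultMode
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        defaultMode: 0400
        sources:
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name
  EOF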
May 11 14:18:51.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:18:51.874: INFO: namespace projected-9764 deletion completed in 6.166329392s • [SLOW TEST:10.882 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:18:51.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:19:00.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3299" for this suite. 
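Note: the kubelet test above never prints the pod it schedules; the behavior it checks is reproducible with any container whose command always exits non-zero (names here are illustrative):

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: bin-false-demo
  spec:
    restartPolicy: Never
    containers:
    - name: bin-false
      image: busybox:1.29
      command: ["/bin/false"]    # always fails, so the container ends up terminated with a reason
  EOF
  kubectl get pod bin-false-demo \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
  # expected output once terminated: Error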
May 11 14:19:06.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:19:06.720: INFO: namespace kubelet-test-3299 deletion completed in 6.109744527s • [SLOW TEST:14.845 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:19:06.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args May 11 14:19:07.027: INFO: Waiting up to 5m0s for pod "var-expansion-f9168a4c-745c-47d8-b227-3a9a360276b5" in namespace "var-expansion-810" to be "success or failure" May 11 14:19:07.215: INFO: Pod "var-expansion-f9168a4c-745c-47d8-b227-3a9a360276b5": Phase="Pending", Reason="", readiness=false. Elapsed: 187.685636ms May 11 14:19:09.401: INFO: Pod "var-expansion-f9168a4c-745c-47d8-b227-3a9a360276b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.374311473s May 11 14:19:11.431: INFO: Pod "var-expansion-f9168a4c-745c-47d8-b227-3a9a360276b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.403574503s May 11 14:19:13.434: INFO: Pod "var-expansion-f9168a4c-745c-47d8-b227-3a9a360276b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.40699301s STEP: Saw pod success May 11 14:19:13.434: INFO: Pod "var-expansion-f9168a4c-745c-47d8-b227-3a9a360276b5" satisfied condition "success or failure" May 11 14:19:13.437: INFO: Trying to get logs from node iruya-worker pod var-expansion-f9168a4c-745c-47d8-b227-3a9a360276b5 container dapi-container: STEP: delete the pod May 11 14:19:13.452: INFO: Waiting for pod var-expansion-f9168a4c-745c-47d8-b227-3a9a360276b5 to disappear May 11 14:19:13.502: INFO: Pod var-expansion-f9168a4c-745c-47d8-b227-3a9a360276b5 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:19:13.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-810" for this suite. 
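Note: substitution in args uses the $(VAR) syntax and only resolves variables already declared in env. A minimal pod equivalent to the "substitution in container's args" fixture (names and values illustrative):

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox:1.29
      env:
      - name: TEST_VAR
        value: "test-value"
      command: ["sh", "-c"]
      args: ["echo $(TEST_VAR)"]   # the kubelet expands this to: echo test-value
  EOF

An undefined $(NAME) reference is left as literal text rather than failing the pod.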
May 11 14:19:19.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:19:19.602: INFO: namespace var-expansion-810 deletion completed in 6.095281968s • [SLOW TEST:12.881 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:19:19.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-48478059-7cce-4777-8657-f5566b488f7e STEP: Creating a pod to test consume secrets May 11 14:19:19.849: INFO: Waiting up to 5m0s for pod "pod-secrets-a1a01f45-76cc-4a45-a91c-74052163a258" in namespace "secrets-1921" to be "success or failure" May 11 14:19:19.911: INFO: Pod "pod-secrets-a1a01f45-76cc-4a45-a91c-74052163a258": Phase="Pending", Reason="", readiness=false. Elapsed: 61.890701ms May 11 14:19:21.914: INFO: Pod "pod-secrets-a1a01f45-76cc-4a45-a91c-74052163a258": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064624027s May 11 14:19:24.491: INFO: Pod "pod-secrets-a1a01f45-76cc-4a45-a91c-74052163a258": Phase="Pending", Reason="", readiness=false. Elapsed: 4.642026087s May 11 14:19:26.495: INFO: Pod "pod-secrets-a1a01f45-76cc-4a45-a91c-74052163a258": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.645926954s STEP: Saw pod success May 11 14:19:26.495: INFO: Pod "pod-secrets-a1a01f45-76cc-4a45-a91c-74052163a258" satisfied condition "success or failure" May 11 14:19:26.498: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-a1a01f45-76cc-4a45-a91c-74052163a258 container secret-volume-test: STEP: delete the pod May 11 14:19:26.649: INFO: Waiting for pod pod-secrets-a1a01f45-76cc-4a45-a91c-74052163a258 to disappear May 11 14:19:26.700: INFO: Pod pod-secrets-a1a01f45-76cc-4a45-a91c-74052163a258 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:19:26.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1921" for this suite. 
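Note: the secret fixture and consuming pod above are created via the API; by hand the same consumption path is (names and values illustrative):

  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox:1.29
      command: ["cat", "/etc/secret-volume/data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
        readOnly: true
    volumes:
    - name: secret-volume
      secret:
        secretName: demo-secret
  EOF

The "non-root with defaultMode and fsGroup set" variant later in the run layers a pod-level securityContext (runAsUser, fsGroup) and secret.defaultMode onto this same shape.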
May 11 14:19:32.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:19:32.894: INFO: namespace secrets-1921 deletion completed in 6.188417548s • [SLOW TEST:13.292 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:19:32.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 11 14:19:33.007: INFO: Waiting up to 5m0s for pod "downward-api-8ee99428-32c5-4d0f-a73f-5a372394e7b5" in namespace "downward-api-9474" to be "success or failure" May 11 14:19:33.043: INFO: Pod "downward-api-8ee99428-32c5-4d0f-a73f-5a372394e7b5": Phase="Pending", Reason="", readiness=false. Elapsed: 36.229299ms May 11 14:19:35.053: INFO: Pod "downward-api-8ee99428-32c5-4d0f-a73f-5a372394e7b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046528556s May 11 14:19:37.056: INFO: Pod "downward-api-8ee99428-32c5-4d0f-a73f-5a372394e7b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049024531s STEP: Saw pod success May 11 14:19:37.056: INFO: Pod "downward-api-8ee99428-32c5-4d0f-a73f-5a372394e7b5" satisfied condition "success or failure" May 11 14:19:37.061: INFO: Trying to get logs from node iruya-worker2 pod downward-api-8ee99428-32c5-4d0f-a73f-5a372394e7b5 container dapi-container: STEP: delete the pod May 11 14:19:37.138: INFO: Waiting for pod downward-api-8ee99428-32c5-4d0f-a73f-5a372394e7b5 to disappear May 11 14:19:37.210: INFO: Pod downward-api-8ee99428-32c5-4d0f-a73f-5a372394e7b5 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:19:37.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9474" for this suite. 
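Note: the point of this test is that a resourceFieldRef for limits.cpu/limits.memory falls back to the node's allocatable values when the container declares no limits of its own. A sketch (names illustrative):

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-api-limits-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox:1.29
      command: ["sh", "-c", "echo cpu=$CPU_LIMIT mem=$MEMORY_LIMIT"]
      # no resources.limits set, so both env vars report node allocatable
      env:
      - name: CPU_LIMIT
        valueFrom:
          resourceFieldRef:
            resource: limits.cpu
      - name: MEMORY_LIMIT
        valueFrom:
          resourceFieldRef:
            resource: limits.memory
  EOF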
May 11 14:19:43.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:19:43.337: INFO: namespace downward-api-9474 deletion completed in 6.122916643s • [SLOW TEST:10.443 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:19:43.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-f2fe2b32-7069-42e5-8dbc-e264d3ebbd02 STEP: Creating a pod to test consume secrets May 11 14:19:43.582: INFO: Waiting up to 5m0s for pod "pod-secrets-d82fe9ad-31b0-49be-8240-6b5673c36d3e" in namespace "secrets-7367" to be "success or failure" May 11 14:19:43.587: INFO: Pod "pod-secrets-d82fe9ad-31b0-49be-8240-6b5673c36d3e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.572705ms May 11 14:19:45.591: INFO: Pod "pod-secrets-d82fe9ad-31b0-49be-8240-6b5673c36d3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009044161s May 11 14:19:47.595: INFO: Pod "pod-secrets-d82fe9ad-31b0-49be-8240-6b5673c36d3e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012650436s May 11 14:19:49.599: INFO: Pod "pod-secrets-d82fe9ad-31b0-49be-8240-6b5673c36d3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016541312s STEP: Saw pod success May 11 14:19:49.599: INFO: Pod "pod-secrets-d82fe9ad-31b0-49be-8240-6b5673c36d3e" satisfied condition "success or failure" May 11 14:19:49.602: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-d82fe9ad-31b0-49be-8240-6b5673c36d3e container secret-volume-test: STEP: delete the pod May 11 14:19:49.791: INFO: Waiting for pod pod-secrets-d82fe9ad-31b0-49be-8240-6b5673c36d3e to disappear May 11 14:19:49.841: INFO: Pod pod-secrets-d82fe9ad-31b0-49be-8240-6b5673c36d3e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:19:49.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7367" for this suite. 
May 11 14:19:55.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:19:55.950: INFO: namespace secrets-7367 deletion completed in 6.105486344s • [SLOW TEST:12.612 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:19:55.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-87e80c56-473c-4621-ade6-946e7c93ad05 May 11 14:19:56.399: INFO: Pod name my-hostname-basic-87e80c56-473c-4621-ade6-946e7c93ad05: Found 0 pods out of 1 May 11 14:20:01.404: INFO: Pod name my-hostname-basic-87e80c56-473c-4621-ade6-946e7c93ad05: Found 1 pods out of 1 May 11 14:20:01.404: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-87e80c56-473c-4621-ade6-946e7c93ad05" are running May 11 14:20:01.407: INFO: Pod "my-hostname-basic-87e80c56-473c-4621-ade6-946e7c93ad05-vtwrr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 14:19:56 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 14:19:59 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 14:19:59 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 14:19:56 +0000 UTC Reason: Message:}]) May 11 14:20:01.407: INFO: Trying to dial the pod May 11 14:20:06.416: INFO: Controller my-hostname-basic-87e80c56-473c-4621-ade6-946e7c93ad05: Got expected result from replica 1 [my-hostname-basic-87e80c56-473c-4621-ade6-946e7c93ad05-vtwrr]: "my-hostname-basic-87e80c56-473c-4621-ade6-946e7c93ad05-vtwrr", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:20:06.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7970" for this suite. 
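Note: the ReplicationController behind "serve a basic image on each replica" is not printed; shape-wise it is an ordinary v1 ReplicationController whose pods answer HTTP with their own hostname, which is what the "Trying to dial the pod" step verifies. A sketch (name and image are illustrative):

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: my-hostname-basic-demo
  spec:
    replicas: 1
    selector:
      name: my-hostname-basic-demo
    template:
      metadata:
        labels:
          name: my-hostname-basic-demo
      spec:
        containers:
        - name: my-hostname-basic-demo
          image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # any image that echoes its hostname works
          ports:
          - containerPort: 9376
  EOF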
May 11 14:20:12.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:20:12.634: INFO: namespace replication-controller-7970 deletion completed in 6.214666441s • [SLOW TEST:16.684 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:20:12.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-057206ba-ec9c-4b41-840a-177a11243e52 STEP: Creating secret with name s-test-opt-upd-9db5126a-5479-4c9a-b2de-d1cb41406f81 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-057206ba-ec9c-4b41-840a-177a11243e52 STEP: Updating secret s-test-opt-upd-9db5126a-5479-4c9a-b2de-d1cb41406f81 STEP: Creating secret with name s-test-opt-create-2a032e02-c3a2-4a61-9626-b5c5f2e61278 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:21:35.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1931" for this suite. 
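Note: "optional" here is the optional field on the secret volume source: a missing secret does not block pod startup, and the kubelet projects it in once it appears, which is exactly the delete/update/create sequence above. The relevant shape (names illustrative):

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: optional-secret-demo
  spec:
    containers:
    - name: watcher
      image: busybox:1.29
      command: ["sh", "-c", "while true; do ls /etc/opt-secret 2>/dev/null; sleep 5; done"]
      volumeMounts:
      - name: opt-secret
        mountPath: /etc/opt-secret
    volumes:
    - name: opt-secret
      secret:
        secretName: s-test-opt-create-demo
        optional: true            # pod runs even while this secret does not exist yet
  EOF

Changes propagate on the kubelet's periodic sync, which is why the test sits in "waiting to observe update in volume" for over a minute.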
May 11 14:21:57.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:21:57.453: INFO: namespace secrets-1931 deletion completed in 22.083438926s • [SLOW TEST:104.819 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:21:57.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin May 11 14:21:57.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8292 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 11 14:22:02.090: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0511 14:22:01.686230 3249 log.go:172] (0xc000956370) (0xc00030a280) Create stream\nI0511 14:22:01.686300 3249 log.go:172] (0xc000956370) (0xc00030a280) Stream added, broadcasting: 1\nI0511 14:22:01.690559 3249 log.go:172] (0xc000956370) Reply frame received for 1\nI0511 14:22:01.690608 3249 log.go:172] (0xc000956370) (0xc0007ec500) Create stream\nI0511 14:22:01.690667 3249 log.go:172] (0xc000956370) (0xc0007ec500) Stream added, broadcasting: 3\nI0511 14:22:01.692233 3249 log.go:172] (0xc000956370) Reply frame received for 3\nI0511 14:22:01.692266 3249 log.go:172] (0xc000956370) (0xc000966000) Create stream\nI0511 14:22:01.692280 3249 log.go:172] (0xc000956370) (0xc000966000) Stream added, broadcasting: 5\nI0511 14:22:01.693295 3249 log.go:172] (0xc000956370) Reply frame received for 5\nI0511 14:22:01.693344 3249 log.go:172] (0xc000956370) (0xc00030a000) Create stream\nI0511 14:22:01.693359 3249 log.go:172] (0xc000956370) (0xc00030a000) Stream added, broadcasting: 7\nI0511 14:22:01.696513 3249 log.go:172] (0xc000956370) Reply frame received for 7\nI0511 14:22:01.696603 3249 log.go:172] (0xc0007ec500) (3) Writing data frame\nI0511 14:22:01.696686 3249 log.go:172] (0xc0007ec500) (3) Writing data frame\nI0511 14:22:01.697529 3249 log.go:172] (0xc000956370) Data frame received for 5\nI0511 14:22:01.697548 3249 log.go:172] (0xc000966000) (5) Data frame handling\nI0511 14:22:01.697568 3249 log.go:172] (0xc000966000) (5) Data frame sent\nI0511 14:22:01.697841 3249 log.go:172] (0xc000956370) Data frame received for 5\nI0511 14:22:01.697856 3249 log.go:172] (0xc000966000) (5) Data frame handling\nI0511 14:22:01.697863 3249 log.go:172] (0xc000966000) (5) Data frame sent\nI0511 14:22:01.747460 3249 log.go:172] (0xc000956370) Data frame received for 5\nI0511 14:22:01.747518 3249 log.go:172] (0xc000966000) (5) Data frame handling\nI0511 14:22:01.747547 3249 log.go:172] (0xc000956370) Data frame received for 7\nI0511 14:22:01.747563 3249 log.go:172] (0xc00030a000) (7) Data frame handling\nI0511 14:22:01.747855 3249 log.go:172] (0xc000956370) Data frame received for 1\nI0511 14:22:01.747875 3249 log.go:172] (0xc00030a280) (1) Data frame handling\nI0511 14:22:01.747892 3249 log.go:172] (0xc00030a280) (1) Data frame sent\nI0511 14:22:01.748065 3249 log.go:172] (0xc000956370) (0xc00030a280) Stream removed, broadcasting: 1\nI0511 14:22:01.748176 3249 log.go:172] (0xc000956370) (0xc00030a280) Stream removed, broadcasting: 1\nI0511 14:22:01.748201 3249 log.go:172] (0xc000956370) (0xc0007ec500) Stream removed, broadcasting: 3\nI0511 14:22:01.748219 3249 log.go:172] (0xc000956370) (0xc000966000) Stream removed, broadcasting: 5\nI0511 14:22:01.748407 3249 log.go:172] (0xc000956370) (0xc00030a000) Stream removed, broadcasting: 7\nI0511 14:22:01.748504 3249 log.go:172] (0xc000956370) Go away received\n" May 11 14:22:02.091: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:22:04.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8292" for this suite. 
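Note: the --generator=job/v1 form used above was already deprecated on this 1.15-era kubectl and was later removed. On current clients the same run, attach, and delete-on-exit loop is done with a bare pod, or the Job is created explicitly (a sketch of the modern equivalents, not what this test ran):

  # pod-based equivalent on modern kubectl
  echo abcd1234 | kubectl run e2e-test-rm-busybox-job \
    --image=docker.io/library/busybox:1.29 \
    --rm -i --restart=Never -- sh -c 'cat && echo stdin closed'

  # or create the Job explicitly
  kubectl create job e2e-test-rm-busybox-job \
    --image=docker.io/library/busybox:1.29 -- sh -c 'echo stdin closed'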
May 11 14:22:14.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:22:14.243: INFO: namespace kubectl-8292 deletion completed in 10.140071493s • [SLOW TEST:16.789 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:22:14.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-5b606bea-5e35-45c4-9da5-e2aba33d3262 STEP: Creating secret with name s-test-opt-upd-8e46598b-15b5-4060-845a-31509e56f1bb STEP: Creating the pod STEP: Deleting secret s-test-opt-del-5b606bea-5e35-45c4-9da5-e2aba33d3262 STEP: Updating secret s-test-opt-upd-8e46598b-15b5-4060-845a-31509e56f1bb STEP: Creating secret with name s-test-opt-create-ef217d15-4b97-4b57-822a-4d386b8ae213 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:23:53.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7343" for this suite. 
May 11 14:24:17.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:24:17.310: INFO: namespace projected-7343 deletion completed in 24.101996746s • [SLOW TEST:123.067 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:24:17.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0511 14:24:57.918013 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 11 14:24:57.918: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:24:57.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1152" for this suite. 
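Note: orphan-on-delete is a deleteOptions propagation policy, not a garbage-collector setting: the rc above is deleted with propagationPolicy=Orphan, so its pods lose their ownerReference instead of being collected. From the command line the same thing is (illustrative; the test calls the API directly):

  kubectl delete rc my-rc --cascade=orphan    # kubectl >= 1.20
  kubectl delete rc my-rc --cascade=false     # boolean spelling on 1.15-era clients like this run

The 30-second "wait ... to see if the garbage collector mistakenly deletes the pods" step then just confirms the orphaned pods stay Running.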
May 11 14:25:05.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:25:06.009: INFO: namespace gc-1152 deletion completed in 8.08773231s • [SLOW TEST:48.698 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:25:06.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-121a286b-ca07-47d1-b979-94ff53ec1a54 in namespace container-probe-3198 May 11 14:25:12.560: INFO: Started pod test-webserver-121a286b-ca07-47d1-b979-94ff53ec1a54 in namespace container-probe-3198 STEP: checking the pod's current state and verifying that restartCount is present May 11 14:25:12.563: INFO: Initial restart count of pod test-webserver-121a286b-ca07-47d1-b979-94ff53ec1a54 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:29:13.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3198" for this suite. 
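Note: the fixture here is a webserver whose probed path keeps returning success, so restartCount stays 0 across the roughly four-minute observation window above. The shape of such a probe (image, path, and thresholds illustrative):

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-ok-demo
  spec:
    containers:
    - name: test-webserver
      image: nginx:1.25            # any server that returns 200 on the probed path
      livenessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 15
        periodSeconds: 10
        failureThreshold: 3        # restart only after 3 consecutive failures
  EOF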
May 11 14:29:20.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:29:20.838: INFO: namespace container-probe-3198 deletion completed in 6.304681671s • [SLOW TEST:254.829 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:29:20.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 11 14:29:29.087: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 14:29:29.098: INFO: Pod pod-with-poststart-http-hook still exists May 11 14:29:31.098: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 14:29:31.195: INFO: Pod pod-with-poststart-http-hook still exists May 11 14:29:33.099: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 14:29:33.103: INFO: Pod pod-with-poststart-http-hook still exists May 11 14:29:35.098: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 14:29:35.102: INFO: Pod pod-with-poststart-http-hook still exists May 11 14:29:37.098: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 14:29:37.102: INFO: Pod pod-with-poststart-http-hook still exists May 11 14:29:39.098: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 14:29:39.101: INFO: Pod pod-with-poststart-http-hook still exists May 11 14:29:41.098: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 14:29:41.102: INFO: Pod pod-with-poststart-http-hook still exists May 11 14:29:43.098: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 11 14:29:43.103: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:29:43.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4748" for this suite. 
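Note: a postStart httpGet hook fires right after the container starts; the "check poststart hook" step verifies that the handler pod created in BeforeEach received the GET. A minimal sketch of the hooked pod (the target host, port, and path are illustrative; the real test points the hook at its helper pod's IP):

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-poststart-http-hook
  spec:
    containers:
    - name: main
      image: busybox:1.29
      command: ["sh", "-c", "sleep 600"]
      lifecycle:
        postStart:
          httpGet:
            host: 10.244.2.100   # illustrative: IP of the pod that records the hook request
            path: /echo?msg=poststart
            port: 8080
  EOF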
May 11 14:30:05.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:30:05.190: INFO: namespace container-lifecycle-hook-4748 deletion completed in 22.08293671s • [SLOW TEST:44.352 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:30:05.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode May 11 14:30:05.324: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3034" to be "success or failure" May 11 14:30:05.342: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 18.251816ms May 11 14:30:07.346: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02154144s May 11 14:30:09.349: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025052015s May 11 14:30:11.367: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042978501s STEP: Saw pod success May 11 14:30:11.367: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 11 14:30:11.370: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 11 14:30:11.394: INFO: Waiting for pod pod-host-path-test to disappear May 11 14:30:11.399: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:30:11.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-3034" for this suite. 
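Note: a hostPath-volume sketch equivalent to the "Creating a pod to test hostPath mode" step above (path and names illustrative; the test inspects the mode of the mounted directory from inside the container):

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-host-path-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container-1
      image: busybox:1.29
      command: ["sh", "-c", "ls -ld /test-volume"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      hostPath:
        path: /tmp/hostpath-demo
        type: DirectoryOrCreate
  EOF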
May 11 14:30:17.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:30:17.503: INFO: namespace hostpath-3034 deletion completed in 6.100441353s • [SLOW TEST:12.312 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:30:17.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-21afd72c-d39e-48bb-bb81-39f7769c5fff STEP: Creating a pod to test consume configMaps May 11 14:30:17.565: INFO: Waiting up to 5m0s for pod "pod-configmaps-90718f60-1d5e-4325-9a45-af7e6006983f" in namespace "configmap-911" to be "success or failure" May 11 14:30:17.589: INFO: Pod "pod-configmaps-90718f60-1d5e-4325-9a45-af7e6006983f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.728107ms May 11 14:30:19.593: INFO: Pod "pod-configmaps-90718f60-1d5e-4325-9a45-af7e6006983f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028171308s May 11 14:30:21.596: INFO: Pod "pod-configmaps-90718f60-1d5e-4325-9a45-af7e6006983f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031037267s May 11 14:30:23.600: INFO: Pod "pod-configmaps-90718f60-1d5e-4325-9a45-af7e6006983f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034951618s STEP: Saw pod success May 11 14:30:23.600: INFO: Pod "pod-configmaps-90718f60-1d5e-4325-9a45-af7e6006983f" satisfied condition "success or failure" May 11 14:30:23.603: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-90718f60-1d5e-4325-9a45-af7e6006983f container configmap-volume-test: STEP: delete the pod May 11 14:30:23.653: INFO: Waiting for pod pod-configmaps-90718f60-1d5e-4325-9a45-af7e6006983f to disappear May 11 14:30:23.669: INFO: Pod pod-configmaps-90718f60-1d5e-4325-9a45-af7e6006983f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:30:23.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-911" for this suite. 
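Note: configMap consumption mirrors the secret-volume tests, with the non-root angle coming from the pod securityContext (names and values illustrative):

  kubectl create configmap demo-config --from-literal=data-1=value-1
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000              # the [non-root] part of the test name
    containers:
    - name: configmap-volume-test
      image: busybox:1.29
      command: ["cat", "/etc/configmap-volume/data-1"]
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/configmap-volume
    volumes:
    - name: configmap-volume
      configMap:
        name: demo-config
  EOF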
May 11 14:30:29.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:30:29.848: INFO: namespace configmap-911 deletion completed in 6.175419803s • [SLOW TEST:12.345 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:30:29.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 14:30:30.200: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 11 14:30:30.207: INFO: Number of nodes with available pods: 0 May 11 14:30:30.207: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 11 14:30:30.236: INFO: Number of nodes with available pods: 0
May 11 14:30:30.236: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:30:31.240: INFO: Number of nodes with available pods: 0
May 11 14:30:31.240: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:30:32.240: INFO: Number of nodes with available pods: 0
May 11 14:30:32.240: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:30:33.239: INFO: Number of nodes with available pods: 0
May 11 14:30:33.239: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:30:34.239: INFO: Number of nodes with available pods: 1
May 11 14:30:34.239: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
May 11 14:30:34.274: INFO: Number of nodes with available pods: 1
May 11 14:30:34.274: INFO: Number of running nodes: 0, number of available pods: 1
May 11 14:30:35.276: INFO: Number of nodes with available pods: 0
May 11 14:30:35.276: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
May 11 14:30:35.315: INFO: Number of nodes with available pods: 0
May 11 14:30:35.316: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:30:36.494: INFO: Number of nodes with available pods: 0
May 11 14:30:36.494: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:30:37.320: INFO: Number of nodes with available pods: 0
May 11 14:30:37.320: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:30:38.320: INFO: Number of nodes with available pods: 0
May 11 14:30:38.320: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:30:39.319: INFO: Number of nodes with available pods: 0
May 11 14:30:39.319: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:30:40.319: INFO: Number of nodes with available pods: 0
May 11 14:30:40.319: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:30:41.560: INFO: Number of nodes with available pods: 0
May 11 14:30:41.560: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:30:42.343: INFO: Number of nodes with available pods: 0
May 11 14:30:42.343: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:30:43.320: INFO: Number of nodes with available pods: 0
May 11 14:30:43.320: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:30:44.380: INFO: Number of nodes with available pods: 1
May 11 14:30:44.380: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9524, will wait for the garbage collector to delete the pods
May 11 14:30:44.445: INFO: Deleting DaemonSet.extensions daemon-set took: 5.716596ms
May 11 14:30:44.745: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.353004ms
May 11 14:30:48.848: INFO: Number of nodes with available pods: 0
May 11 14:30:48.848: INFO: Number of running nodes: 0, number of available pods: 0
May 11 14:30:48.851: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9524/daemonsets","resourceVersion":"10263893"},"items":null}
May 11 14:30:48.853: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9524/pods","resourceVersion":"10263893"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 14:30:48.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9524" for this suite.
May 11 14:30:54.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 14:30:54.975: INFO: namespace daemonsets-9524 deletion completed in 6.070565927s
• [SLOW TEST:25.127 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 14:30:54.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 11 14:30:59.237: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 14:30:59.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6471" for this suite.
May 11 14:31:05.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 14:31:05.570: INFO: namespace container-runtime-6471 deletion completed in 6.16733899s
• [SLOW TEST:10.596 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 14:31:05.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
May 11 14:31:09.691: INFO: Pod pod-hostip-7265acf6-ecea-43ab-8c67-317273a8afac has hostIP: 172.17.0.6
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 14:31:09.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9074" for this suite.
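The hostIP reported above is the address of the node the pod landed on (iruya-worker in this run). It can be read straight from the pod status; the sketch below reuses the pod name and namespace from the log:

  kubectl --namespace=pods-9074 get pod pod-hostip-7265acf6-ecea-43ab-8c67-317273a8afac \
    -o jsonpath='{.status.hostIP}'
  # -> 172.17.0.6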
May 11 14:31:31.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 14:31:31.767: INFO: namespace pods-9074 deletion completed in 22.071553832s
• [SLOW TEST:26.196 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 14:31:31.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1730
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-1730
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1730
May 11 14:31:31.927: INFO: Found 0 stateful pods, waiting for 1
May 11 14:31:41.931: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
May 11 14:31:41.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 11 14:31:46.222: INFO: stderr: "I0511 14:31:46.098691 3272 log.go:172] (0xc000b6f340) (0xc00031f5e0) Create stream\nI0511 14:31:46.098719 3272 log.go:172] (0xc000b6f340) (0xc00031f5e0) Stream added, broadcasting: 1\nI0511 14:31:46.101390 3272 log.go:172] (0xc000b6f340) Reply frame received for 1\nI0511 14:31:46.101442 3272 log.go:172] (0xc000b6f340) (0xc00031e000) Create stream\nI0511 14:31:46.101473 3272 log.go:172] (0xc000b6f340) (0xc00031e000) Stream added, broadcasting: 3\nI0511 14:31:46.102226 3272 log.go:172] (0xc000b6f340) Reply frame received for 3\nI0511 14:31:46.102268 3272 log.go:172] (0xc000b6f340) (0xc0001f2140) Create stream\nI0511 14:31:46.102281 3272 log.go:172] (0xc000b6f340) (0xc0001f2140) Stream added, broadcasting: 5\nI0511 14:31:46.103055 3272 log.go:172] (0xc000b6f340) Reply frame received for 5\nI0511 14:31:46.182221 3272 log.go:172] (0xc000b6f340) Data frame received for 5\nI0511 14:31:46.182243 3272 log.go:172] (0xc0001f2140) (5) Data frame handling\nI0511 14:31:46.182258 3272 log.go:172] (0xc0001f2140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0511 14:31:46.216313 3272 log.go:172] (0xc000b6f340) Data frame received for 3\nI0511 
14:31:46.216348 3272 log.go:172] (0xc00031e000) (3) Data frame handling\nI0511 14:31:46.216369 3272 log.go:172] (0xc00031e000) (3) Data frame sent\nI0511 14:31:46.216524 3272 log.go:172] (0xc000b6f340) Data frame received for 5\nI0511 14:31:46.216542 3272 log.go:172] (0xc0001f2140) (5) Data frame handling\nI0511 14:31:46.216564 3272 log.go:172] (0xc000b6f340) Data frame received for 3\nI0511 14:31:46.216594 3272 log.go:172] (0xc00031e000) (3) Data frame handling\nI0511 14:31:46.218198 3272 log.go:172] (0xc000b6f340) Data frame received for 1\nI0511 14:31:46.218221 3272 log.go:172] (0xc00031f5e0) (1) Data frame handling\nI0511 14:31:46.218230 3272 log.go:172] (0xc00031f5e0) (1) Data frame sent\nI0511 14:31:46.218239 3272 log.go:172] (0xc000b6f340) (0xc00031f5e0) Stream removed, broadcasting: 1\nI0511 14:31:46.218368 3272 log.go:172] (0xc000b6f340) Go away received\nI0511 14:31:46.218521 3272 log.go:172] (0xc000b6f340) (0xc00031f5e0) Stream removed, broadcasting: 1\nI0511 14:31:46.218536 3272 log.go:172] (0xc000b6f340) (0xc00031e000) Stream removed, broadcasting: 3\nI0511 14:31:46.218544 3272 log.go:172] (0xc000b6f340) (0xc0001f2140) Stream removed, broadcasting: 5\n" May 11 14:31:46.222: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 11 14:31:46.222: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 11 14:31:46.226: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 11 14:31:56.230: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 11 14:31:56.230: INFO: Waiting for statefulset status.replicas updated to 0 May 11 14:31:56.252: INFO: POD NODE PHASE GRACE CONDITIONS May 11 14:31:56.252: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:32 +0000 UTC }] May 11 14:31:56.252: INFO: May 11 14:31:56.252: INFO: StatefulSet ss has not reached scale 3, at 1 May 11 14:31:57.279: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987862691s May 11 14:31:58.284: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.960348893s May 11 14:31:59.312: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.95546031s May 11 14:32:00.352: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.927369123s May 11 14:32:01.370: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.888014012s May 11 14:32:02.375: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.869811668s May 11 14:32:03.383: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.864254434s May 11 14:32:04.389: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.856699337s May 11 14:32:05.395: INFO: Verifying statefulset ss doesn't scale past 3 for another 850.804428ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1730 May 11 14:32:06.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true' May 11 14:32:06.642: INFO: stderr: "I0511 14:32:06.542837 3304 log.go:172] (0xc000a08420) (0xc0002f46e0) Create stream\nI0511 14:32:06.542885 3304 log.go:172] (0xc000a08420) (0xc0002f46e0) Stream added, broadcasting: 1\nI0511 14:32:06.545524 3304 log.go:172] (0xc000a08420) Reply frame received for 1\nI0511 14:32:06.545598 3304 log.go:172] (0xc000a08420) (0xc00092e000) Create stream\nI0511 14:32:06.545623 3304 log.go:172] (0xc000a08420) (0xc00092e000) Stream added, broadcasting: 3\nI0511 14:32:06.546699 3304 log.go:172] (0xc000a08420) Reply frame received for 3\nI0511 14:32:06.546736 3304 log.go:172] (0xc000a08420) (0xc0009ec000) Create stream\nI0511 14:32:06.546750 3304 log.go:172] (0xc000a08420) (0xc0009ec000) Stream added, broadcasting: 5\nI0511 14:32:06.547987 3304 log.go:172] (0xc000a08420) Reply frame received for 5\nI0511 14:32:06.637793 3304 log.go:172] (0xc000a08420) Data frame received for 5\nI0511 14:32:06.637842 3304 log.go:172] (0xc0009ec000) (5) Data frame handling\nI0511 14:32:06.637857 3304 log.go:172] (0xc0009ec000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0511 14:32:06.637881 3304 log.go:172] (0xc000a08420) Data frame received for 5\nI0511 14:32:06.637963 3304 log.go:172] (0xc0009ec000) (5) Data frame handling\nI0511 14:32:06.638000 3304 log.go:172] (0xc000a08420) Data frame received for 3\nI0511 14:32:06.638030 3304 log.go:172] (0xc00092e000) (3) Data frame handling\nI0511 14:32:06.638067 3304 log.go:172] (0xc00092e000) (3) Data frame sent\nI0511 14:32:06.638083 3304 log.go:172] (0xc000a08420) Data frame received for 3\nI0511 14:32:06.638095 3304 log.go:172] (0xc00092e000) (3) Data frame handling\nI0511 14:32:06.638990 3304 log.go:172] (0xc000a08420) Data frame received for 1\nI0511 14:32:06.639006 3304 log.go:172] (0xc0002f46e0) (1) Data frame handling\nI0511 14:32:06.639014 3304 log.go:172] (0xc0002f46e0) (1) Data frame sent\nI0511 14:32:06.639026 3304 log.go:172] (0xc000a08420) (0xc0002f46e0) Stream removed, broadcasting: 1\nI0511 14:32:06.639040 3304 log.go:172] (0xc000a08420) Go away received\nI0511 14:32:06.639494 3304 log.go:172] (0xc000a08420) (0xc0002f46e0) Stream removed, broadcasting: 1\nI0511 14:32:06.639525 3304 log.go:172] (0xc000a08420) (0xc00092e000) Stream removed, broadcasting: 3\nI0511 14:32:06.639537 3304 log.go:172] (0xc000a08420) (0xc0009ec000) Stream removed, broadcasting: 5\n" May 11 14:32:06.643: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 11 14:32:06.643: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 11 14:32:06.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 14:32:06.860: INFO: stderr: "I0511 14:32:06.767477 3326 log.go:172] (0xc0009c6630) (0xc00062ea00) Create stream\nI0511 14:32:06.767527 3326 log.go:172] (0xc0009c6630) (0xc00062ea00) Stream added, broadcasting: 1\nI0511 14:32:06.770708 3326 log.go:172] (0xc0009c6630) Reply frame received for 1\nI0511 14:32:06.770869 3326 log.go:172] (0xc0009c6630) (0xc0008f4000) Create stream\nI0511 14:32:06.770923 3326 log.go:172] (0xc0009c6630) (0xc0008f4000) Stream added, broadcasting: 3\nI0511 14:32:06.772575 3326 log.go:172] (0xc0009c6630) Reply frame received for 3\nI0511 14:32:06.772612 3326 log.go:172] (0xc0009c6630) (0xc0008f40a0) Create stream\nI0511 
14:32:06.772624 3326 log.go:172] (0xc0009c6630) (0xc0008f40a0) Stream added, broadcasting: 5\nI0511 14:32:06.773799 3326 log.go:172] (0xc0009c6630) Reply frame received for 5\nI0511 14:32:06.854046 3326 log.go:172] (0xc0009c6630) Data frame received for 3\nI0511 14:32:06.854068 3326 log.go:172] (0xc0008f4000) (3) Data frame handling\nI0511 14:32:06.854077 3326 log.go:172] (0xc0008f4000) (3) Data frame sent\nI0511 14:32:06.854083 3326 log.go:172] (0xc0009c6630) Data frame received for 3\nI0511 14:32:06.854088 3326 log.go:172] (0xc0008f4000) (3) Data frame handling\nI0511 14:32:06.854247 3326 log.go:172] (0xc0009c6630) Data frame received for 5\nI0511 14:32:06.854275 3326 log.go:172] (0xc0008f40a0) (5) Data frame handling\nI0511 14:32:06.854282 3326 log.go:172] (0xc0008f40a0) (5) Data frame sent\nI0511 14:32:06.854295 3326 log.go:172] (0xc0009c6630) Data frame received for 5\nI0511 14:32:06.854305 3326 log.go:172] (0xc0008f40a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0511 14:32:06.855856 3326 log.go:172] (0xc0009c6630) Data frame received for 1\nI0511 14:32:06.855873 3326 log.go:172] (0xc00062ea00) (1) Data frame handling\nI0511 14:32:06.855883 3326 log.go:172] (0xc00062ea00) (1) Data frame sent\nI0511 14:32:06.855895 3326 log.go:172] (0xc0009c6630) (0xc00062ea00) Stream removed, broadcasting: 1\nI0511 14:32:06.855955 3326 log.go:172] (0xc0009c6630) Go away received\nI0511 14:32:06.856174 3326 log.go:172] (0xc0009c6630) (0xc00062ea00) Stream removed, broadcasting: 1\nI0511 14:32:06.856194 3326 log.go:172] (0xc0009c6630) (0xc0008f4000) Stream removed, broadcasting: 3\nI0511 14:32:06.856204 3326 log.go:172] (0xc0009c6630) (0xc0008f40a0) Stream removed, broadcasting: 5\n" May 11 14:32:06.860: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 11 14:32:06.860: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 11 14:32:06.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 14:32:07.074: INFO: stderr: "I0511 14:32:06.998827 3347 log.go:172] (0xc00095e370) (0xc0007c8640) Create stream\nI0511 14:32:06.998901 3347 log.go:172] (0xc00095e370) (0xc0007c8640) Stream added, broadcasting: 1\nI0511 14:32:07.002107 3347 log.go:172] (0xc00095e370) Reply frame received for 1\nI0511 14:32:07.002153 3347 log.go:172] (0xc00095e370) (0xc0005a60a0) Create stream\nI0511 14:32:07.002168 3347 log.go:172] (0xc00095e370) (0xc0005a60a0) Stream added, broadcasting: 3\nI0511 14:32:07.003106 3347 log.go:172] (0xc00095e370) Reply frame received for 3\nI0511 14:32:07.003151 3347 log.go:172] (0xc00095e370) (0xc0007c86e0) Create stream\nI0511 14:32:07.003175 3347 log.go:172] (0xc00095e370) (0xc0007c86e0) Stream added, broadcasting: 5\nI0511 14:32:07.004012 3347 log.go:172] (0xc00095e370) Reply frame received for 5\nI0511 14:32:07.066734 3347 log.go:172] (0xc00095e370) Data frame received for 3\nI0511 14:32:07.066782 3347 log.go:172] (0xc0005a60a0) (3) Data frame handling\nI0511 14:32:07.066820 3347 log.go:172] (0xc0005a60a0) (3) Data frame sent\nI0511 14:32:07.066848 3347 log.go:172] (0xc00095e370) Data frame received for 5\nI0511 14:32:07.066874 3347 log.go:172] (0xc0007c86e0) (5) Data frame handling\nI0511 14:32:07.066914 3347 log.go:172] (0xc0007c86e0) (5) Data frame 
sent\nI0511 14:32:07.066941 3347 log.go:172] (0xc00095e370) Data frame received for 5\nI0511 14:32:07.066970 3347 log.go:172] (0xc0007c86e0) (5) Data frame handling\nI0511 14:32:07.067006 3347 log.go:172] (0xc00095e370) Data frame received for 3\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0511 14:32:07.067025 3347 log.go:172] (0xc0005a60a0) (3) Data frame handling\nI0511 14:32:07.068186 3347 log.go:172] (0xc00095e370) Data frame received for 1\nI0511 14:32:07.068215 3347 log.go:172] (0xc0007c8640) (1) Data frame handling\nI0511 14:32:07.068242 3347 log.go:172] (0xc0007c8640) (1) Data frame sent\nI0511 14:32:07.068265 3347 log.go:172] (0xc00095e370) (0xc0007c8640) Stream removed, broadcasting: 1\nI0511 14:32:07.068282 3347 log.go:172] (0xc00095e370) Go away received\nI0511 14:32:07.068704 3347 log.go:172] (0xc00095e370) (0xc0007c8640) Stream removed, broadcasting: 1\nI0511 14:32:07.068730 3347 log.go:172] (0xc00095e370) (0xc0005a60a0) Stream removed, broadcasting: 3\nI0511 14:32:07.068741 3347 log.go:172] (0xc00095e370) (0xc0007c86e0) Stream removed, broadcasting: 5\n" May 11 14:32:07.074: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 11 14:32:07.074: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 11 14:32:07.078: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 11 14:32:17.094: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 11 14:32:17.094: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 11 14:32:17.094: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 11 14:32:17.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 11 14:32:17.316: INFO: stderr: "I0511 14:32:17.228310 3367 log.go:172] (0xc0009b2420) (0xc0003a6820) Create stream\nI0511 14:32:17.228353 3367 log.go:172] (0xc0009b2420) (0xc0003a6820) Stream added, broadcasting: 1\nI0511 14:32:17.230275 3367 log.go:172] (0xc0009b2420) Reply frame received for 1\nI0511 14:32:17.230299 3367 log.go:172] (0xc0009b2420) (0xc000968000) Create stream\nI0511 14:32:17.230310 3367 log.go:172] (0xc0009b2420) (0xc000968000) Stream added, broadcasting: 3\nI0511 14:32:17.231187 3367 log.go:172] (0xc0009b2420) Reply frame received for 3\nI0511 14:32:17.231225 3367 log.go:172] (0xc0009b2420) (0xc00074a000) Create stream\nI0511 14:32:17.231240 3367 log.go:172] (0xc0009b2420) (0xc00074a000) Stream added, broadcasting: 5\nI0511 14:32:17.231980 3367 log.go:172] (0xc0009b2420) Reply frame received for 5\nI0511 14:32:17.310812 3367 log.go:172] (0xc0009b2420) Data frame received for 5\nI0511 14:32:17.310867 3367 log.go:172] (0xc00074a000) (5) Data frame handling\nI0511 14:32:17.310903 3367 log.go:172] (0xc00074a000) (5) Data frame sent\nI0511 14:32:17.310939 3367 log.go:172] (0xc0009b2420) Data frame received for 5\nI0511 14:32:17.310954 3367 log.go:172] (0xc00074a000) (5) Data frame handling\nI0511 14:32:17.310972 3367 log.go:172] (0xc0009b2420) Data frame received for 3\nI0511 14:32:17.311001 3367 log.go:172] (0xc000968000) (3) Data frame handling\nI0511 14:32:17.311032 3367 log.go:172] 
(0xc000968000) (3) Data frame sent\nI0511 14:32:17.311062 3367 log.go:172] (0xc0009b2420) Data frame received for 3\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0511 14:32:17.311080 3367 log.go:172] (0xc000968000) (3) Data frame handling\nI0511 14:32:17.312434 3367 log.go:172] (0xc0009b2420) Data frame received for 1\nI0511 14:32:17.312453 3367 log.go:172] (0xc0003a6820) (1) Data frame handling\nI0511 14:32:17.312471 3367 log.go:172] (0xc0003a6820) (1) Data frame sent\nI0511 14:32:17.312489 3367 log.go:172] (0xc0009b2420) (0xc0003a6820) Stream removed, broadcasting: 1\nI0511 14:32:17.312505 3367 log.go:172] (0xc0009b2420) Go away received\nI0511 14:32:17.312922 3367 log.go:172] (0xc0009b2420) (0xc0003a6820) Stream removed, broadcasting: 1\nI0511 14:32:17.312946 3367 log.go:172] (0xc0009b2420) (0xc000968000) Stream removed, broadcasting: 3\nI0511 14:32:17.312955 3367 log.go:172] (0xc0009b2420) (0xc00074a000) Stream removed, broadcasting: 5\n" May 11 14:32:17.317: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 11 14:32:17.317: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 11 14:32:17.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 11 14:32:17.624: INFO: stderr: "I0511 14:32:17.455750 3387 log.go:172] (0xc000742420) (0xc0003c25a0) Create stream\nI0511 14:32:17.455790 3387 log.go:172] (0xc000742420) (0xc0003c25a0) Stream added, broadcasting: 1\nI0511 14:32:17.457488 3387 log.go:172] (0xc000742420) Reply frame received for 1\nI0511 14:32:17.457515 3387 log.go:172] (0xc000742420) (0xc00003a460) Create stream\nI0511 14:32:17.457523 3387 log.go:172] (0xc000742420) (0xc00003a460) Stream added, broadcasting: 3\nI0511 14:32:17.458184 3387 log.go:172] (0xc000742420) Reply frame received for 3\nI0511 14:32:17.458212 3387 log.go:172] (0xc000742420) (0xc0003c2640) Create stream\nI0511 14:32:17.458220 3387 log.go:172] (0xc000742420) (0xc0003c2640) Stream added, broadcasting: 5\nI0511 14:32:17.458873 3387 log.go:172] (0xc000742420) Reply frame received for 5\nI0511 14:32:17.513374 3387 log.go:172] (0xc000742420) Data frame received for 5\nI0511 14:32:17.513392 3387 log.go:172] (0xc0003c2640) (5) Data frame handling\nI0511 14:32:17.513400 3387 log.go:172] (0xc0003c2640) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0511 14:32:17.619663 3387 log.go:172] (0xc000742420) Data frame received for 3\nI0511 14:32:17.619709 3387 log.go:172] (0xc00003a460) (3) Data frame handling\nI0511 14:32:17.619742 3387 log.go:172] (0xc00003a460) (3) Data frame sent\nI0511 14:32:17.619873 3387 log.go:172] (0xc000742420) Data frame received for 5\nI0511 14:32:17.619894 3387 log.go:172] (0xc0003c2640) (5) Data frame handling\nI0511 14:32:17.620001 3387 log.go:172] (0xc000742420) Data frame received for 3\nI0511 14:32:17.620019 3387 log.go:172] (0xc00003a460) (3) Data frame handling\nI0511 14:32:17.621400 3387 log.go:172] (0xc000742420) Data frame received for 1\nI0511 14:32:17.621411 3387 log.go:172] (0xc0003c25a0) (1) Data frame handling\nI0511 14:32:17.621416 3387 log.go:172] (0xc0003c25a0) (1) Data frame sent\nI0511 14:32:17.621556 3387 log.go:172] (0xc000742420) (0xc0003c25a0) Stream removed, broadcasting: 1\nI0511 14:32:17.621716 3387 log.go:172] (0xc000742420) (0xc0003c25a0) Stream removed, broadcasting: 1\nI0511 14:32:17.621724 3387 
log.go:172] (0xc000742420) (0xc00003a460) Stream removed, broadcasting: 3\nI0511 14:32:17.621819 3387 log.go:172] (0xc000742420) Go away received\nI0511 14:32:17.621843 3387 log.go:172] (0xc000742420) (0xc0003c2640) Stream removed, broadcasting: 5\n" May 11 14:32:17.624: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 11 14:32:17.624: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 11 14:32:17.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 11 14:32:17.843: INFO: stderr: "I0511 14:32:17.744849 3406 log.go:172] (0xc0009be370) (0xc0006c6820) Create stream\nI0511 14:32:17.744904 3406 log.go:172] (0xc0009be370) (0xc0006c6820) Stream added, broadcasting: 1\nI0511 14:32:17.747129 3406 log.go:172] (0xc0009be370) Reply frame received for 1\nI0511 14:32:17.747168 3406 log.go:172] (0xc0009be370) (0xc0008b8000) Create stream\nI0511 14:32:17.747177 3406 log.go:172] (0xc0009be370) (0xc0008b8000) Stream added, broadcasting: 3\nI0511 14:32:17.748233 3406 log.go:172] (0xc0009be370) Reply frame received for 3\nI0511 14:32:17.748289 3406 log.go:172] (0xc0009be370) (0xc0008b80a0) Create stream\nI0511 14:32:17.748312 3406 log.go:172] (0xc0009be370) (0xc0008b80a0) Stream added, broadcasting: 5\nI0511 14:32:17.748947 3406 log.go:172] (0xc0009be370) Reply frame received for 5\nI0511 14:32:17.811072 3406 log.go:172] (0xc0009be370) Data frame received for 5\nI0511 14:32:17.811097 3406 log.go:172] (0xc0008b80a0) (5) Data frame handling\nI0511 14:32:17.811110 3406 log.go:172] (0xc0008b80a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0511 14:32:17.836087 3406 log.go:172] (0xc0009be370) Data frame received for 5\nI0511 14:32:17.836120 3406 log.go:172] (0xc0008b80a0) (5) Data frame handling\nI0511 14:32:17.836149 3406 log.go:172] (0xc0009be370) Data frame received for 3\nI0511 14:32:17.836196 3406 log.go:172] (0xc0008b8000) (3) Data frame handling\nI0511 14:32:17.836233 3406 log.go:172] (0xc0008b8000) (3) Data frame sent\nI0511 14:32:17.836258 3406 log.go:172] (0xc0009be370) Data frame received for 3\nI0511 14:32:17.836277 3406 log.go:172] (0xc0008b8000) (3) Data frame handling\nI0511 14:32:17.837708 3406 log.go:172] (0xc0009be370) Data frame received for 1\nI0511 14:32:17.837724 3406 log.go:172] (0xc0006c6820) (1) Data frame handling\nI0511 14:32:17.837736 3406 log.go:172] (0xc0006c6820) (1) Data frame sent\nI0511 14:32:17.837750 3406 log.go:172] (0xc0009be370) (0xc0006c6820) Stream removed, broadcasting: 1\nI0511 14:32:17.837764 3406 log.go:172] (0xc0009be370) Go away received\nI0511 14:32:17.838240 3406 log.go:172] (0xc0009be370) (0xc0006c6820) Stream removed, broadcasting: 1\nI0511 14:32:17.838266 3406 log.go:172] (0xc0009be370) (0xc0008b8000) Stream removed, broadcasting: 3\nI0511 14:32:17.838281 3406 log.go:172] (0xc0009be370) (0xc0008b80a0) Stream removed, broadcasting: 5\n" May 11 14:32:17.843: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 11 14:32:17.843: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 11 14:32:17.843: INFO: Waiting for statefulset status.replicas updated to 0 May 11 14:32:17.872: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 11 14:32:27.880: INFO: Waiting 
for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 11 14:32:27.880: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 11 14:32:27.880: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 11 14:32:27.999: INFO: POD NODE PHASE GRACE CONDITIONS May 11 14:32:27.999: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:32 +0000 UTC }] May 11 14:32:27.999: INFO: ss-1 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:56 +0000 UTC }] May 11 14:32:27.999: INFO: ss-2 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:56 +0000 UTC }] May 11 14:32:27.999: INFO: May 11 14:32:27.999: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 14:32:29.280: INFO: POD NODE PHASE GRACE CONDITIONS May 11 14:32:29.280: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:32 +0000 UTC }] May 11 14:32:29.280: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:56 +0000 UTC }] May 11 14:32:29.280: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:56 +0000 UTC }] May 11 14:32:29.280: INFO: May 11 14:32:29.280: INFO: StatefulSet ss 
has not reached scale 0, at 3 May 11 14:32:30.298: INFO: POD NODE PHASE GRACE CONDITIONS May 11 14:32:30.298: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:32 +0000 UTC }] May 11 14:32:30.298: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:56 +0000 UTC }] May 11 14:32:30.298: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:56 +0000 UTC }] May 11 14:32:30.298: INFO: May 11 14:32:30.298: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 14:32:31.302: INFO: POD NODE PHASE GRACE CONDITIONS May 11 14:32:31.302: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:32 +0000 UTC }] May 11 14:32:31.302: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:56 +0000 UTC }] May 11 14:32:31.302: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:56 +0000 UTC }] May 11 14:32:31.302: INFO: May 11 14:32:31.302: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 14:32:32.337: INFO: POD NODE PHASE GRACE CONDITIONS May 11 14:32:32.337: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:32 +0000 UTC } {Ready False 0001-01-01 
00:00:00 +0000 UTC 2020-05-11 14:32:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:32 +0000 UTC }] May 11 14:32:32.337: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:56 +0000 UTC }] May 11 14:32:32.338: INFO: May 11 14:32:32.338: INFO: StatefulSet ss has not reached scale 0, at 2 May 11 14:32:33.341: INFO: POD NODE PHASE GRACE CONDITIONS May 11 14:32:33.341: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:32 +0000 UTC }] May 11 14:32:33.341: INFO: May 11 14:32:33.341: INFO: StatefulSet ss has not reached scale 0, at 1 May 11 14:32:34.345: INFO: POD NODE PHASE GRACE CONDITIONS May 11 14:32:34.345: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:32 +0000 UTC }] May 11 14:32:34.345: INFO: May 11 14:32:34.345: INFO: StatefulSet ss has not reached scale 0, at 1 May 11 14:32:35.349: INFO: POD NODE PHASE GRACE CONDITIONS May 11 14:32:35.349: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:32 +0000 UTC }] May 11 14:32:35.349: INFO: May 11 14:32:35.349: INFO: StatefulSet ss has not reached scale 0, at 1 May 11 14:32:36.353: INFO: POD NODE PHASE GRACE CONDITIONS May 11 14:32:36.353: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:32 +0000 UTC }] May 11 14:32:36.353: INFO: May 11 14:32:36.353: INFO: StatefulSet ss has not reached scale 0, at 1 May 11 
14:32:37.357: INFO: POD NODE PHASE GRACE CONDITIONS May 11 14:32:37.357: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:32:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:31:32 +0000 UTC }] May 11 14:32:37.357: INFO: May 11 14:32:37.357: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-1730 May 11 14:32:38.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 14:32:38.491: INFO: rc: 1 May 11 14:32:38.491: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc0031a40f0 exit status 1 true [0xc001714080 0xc001714098 0xc0017140b0] [0xc001714080 0xc001714098 0xc0017140b0] [0xc001714090 0xc0017140a8] [0xba70e0 0xba70e0] 0xc002036180 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 May 11 14:32:48.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 14:32:48.592: INFO: rc: 1 May 11 14:32:48.592: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00294ecf0 exit status 1 true [0xc00035d158 0xc00035d2b8 0xc00035d4e8] [0xc00035d158 0xc00035d2b8 0xc00035d4e8] [0xc00035d290 0xc00035d3c8] [0xba70e0 0xba70e0] 0xc001bc29c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 14:32:58.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 14:32:58.678: INFO: rc: 1 May 11 14:32:58.678: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00294ede0 exit status 1 true [0xc00035d638 0xc00035d8f0 0xc00035db30] [0xc00035d638 0xc00035d8f0 0xc00035db30] [0xc00035d798 0xc00035dad0] [0xba70e0 0xba70e0] 0xc001bc2d80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 14:33:08.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 14:33:08.777: INFO: rc: 1 May 11 14:33:08.777: INFO: Waiting 10s to retry 
failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00294eed0 exit status 1 true [0xc00035db88 0xc00035dd40 0xc00035ddb8] [0xc00035db88 0xc00035dd40 0xc00035ddb8] [0xc00035dca0 0xc00035dd98] [0xba70e0 0xba70e0] 0xc001bc31a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 14:33:18.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 14:33:18.882: INFO: rc: 1 May 11 14:33:18.882: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0031a41e0 exit status 1 true [0xc0017140b8 0xc0017140d0 0xc0017140e8] [0xc0017140b8 0xc0017140d0 0xc0017140e8] [0xc0017140c8 0xc0017140e0] [0xba70e0 0xba70e0] 0xc002036480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 14:33:28.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 14:33:28.984: INFO: rc: 1 May 11 14:33:28.984: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00294ef90 exit status 1 true [0xc00035ddc0 0xc00035de18 0xc00035de80] [0xc00035ddc0 0xc00035de18 0xc00035de80] [0xc00035ddf0 0xc00035de58] [0xba70e0 0xba70e0] 0xc001bc3560 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 14:33:38.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 14:33:39.081: INFO: rc: 1 May 11 14:33:39.081: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0031a42d0 exit status 1 true [0xc0017140f0 0xc001714108 0xc001714120] [0xc0017140f0 0xc001714108 0xc001714120] [0xc001714100 0xc001714118] [0xba70e0 0xba70e0] 0xc002036780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 14:33:49.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 14:33:49.167: INFO: rc: 1 May 11 14:33:49.167: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not 
found [] 0xc002ceccf0 exit status 1 true [0xc000997aa8 0xc000997b48 0xc000997d58] [0xc000997aa8 0xc000997b48 0xc000997d58] [0xc000997b00 0xc000997ca0] [0xba70e0 0xba70e0] 0xc001e9aae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 14:33:59.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 14:33:59.259: INFO: rc: 1 May 11 14:33:59.259: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00294f050 exit status 1 true [0xc00035dea8 0xc00035df70 0xc00035dfd8] [0xc00035dea8 0xc00035df70 0xc00035dfd8] [0xc00035df20 0xc00035dfc8] [0xba70e0 0xba70e0] 0xc001bc3a40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 14:34:09.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 14:34:09.337: INFO: rc: 1 May 11 14:34:09.337: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00294f140 exit status 1 true [0xc00035dff8 0xc0009d00a8 0xc0009d02e0] [0xc00035dff8 0xc0009d00a8 0xc0009d02e0] [0xc0009d0088 0xc0009d0288] [0xba70e0 0xba70e0] 0xc001baa000 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 14:34:19.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 14:34:19.431: INFO: rc: 1 May 11 14:34:19.431: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00294f200 exit status 1 true [0xc0009d0330 0xc0009d0400 0xc0009d0780] [0xc0009d0330 0xc0009d0400 0xc0009d0780] [0xc0009d0390 0xc0009d05c8] [0xba70e0 0xba70e0] 0xc001baa7e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 14:34:29.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 14:34:29.521: INFO: rc: 1 May 11 14:34:29.521: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0031a4420 exit status 1 true [0xc001714128 0xc001714140 0xc001714158] [0xc001714128 0xc001714140 0xc001714158] [0xc001714138 0xc001714150] [0xba70e0 0xba70e0] 0xc002036a80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not 
found error: exit status 1 May 11 14:34:39.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 14:34:39.632: INFO: rc: 1 May 11 14:34:39.632: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001338090 exit status 1 true [0xc00035c418 0xc00035c978 0xc00035d060] [0xc00035c418 0xc00035c978 0xc00035d060] [0xc00035c7d8 0xc00035cef0] [0xba70e0 0xba70e0] 0xc001bc2540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 14:34:49.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 14:34:49.717: INFO: rc: 1 May 11 14:34:49.717: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00220e0c0 exit status 1 true [0xc000186000 0xc0009962e0 0xc000996e80] [0xc000186000 0xc0009962e0 0xc000996e80] [0xc000996040 0xc000996690] [0xba70e0 0xba70e0] 0xc0019d8ba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 14:34:59.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 14:34:59.803: INFO: rc: 1 May 11 14:34:59.803: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002cec090 exit status 1 true [0xc001714000 0xc001714018 0xc001714030] [0xc001714000 0xc001714018 0xc001714030] [0xc001714010 0xc001714028] [0xba70e0 0xba70e0] 0xc001e9a600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 14:35:09.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 14:35:09.890: INFO: rc: 1 May 11 14:35:09.890: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00220e1b0 exit status 1 true [0xc000997100 0xc000997598 0xc0009978e0] [0xc000997100 0xc000997598 0xc0009978e0] [0xc000997308 0xc000997790] [0xba70e0 0xba70e0] 0xc0019d9920 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 14:35:19.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 14:35:19.988: INFO: rc: 
May 11 14:35:19.988: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00220e2a0 exit status 1 true [0xc000997988 0xc000997b00 0xc000997ca0] [0xc000997988 0xc000997b00 0xc000997ca0] [0xc000997ac8 0xc000997c40] [0xba70e0 0xba70e0] 0xc002036180 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 
[the same RunHostCmd attempt was retried every 10s, 13 further times from 14:35:29.988 through 14:37:31.592, each returning rc: 1 with stderr: Error from server (NotFound): pods "ss-0" not found] 
May 11 14:37:41.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 11 14:37:41.868: INFO: rc: 1 May 11 14:37:41.868: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: May 11 14:37:41.868: INFO: Scaling statefulset ss to 0 May 11 14:37:41.874: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 11 14:37:41.875: INFO: Deleting all statefulset in ns statefulset-1730 May 11 14:37:41.877: INFO: Scaling statefulset ss to 0 May 11 14:37:41.883: INFO: Waiting for statefulset status.replicas updated to 0 May 11 14:37:41.884: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:37:41.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1730" for this suite. 
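For reference, the RunHostCmd retry loop above amounts to re-running the same `kubectl exec` every 10s until the command succeeds or a deadline passes. A minimal shell sketch of that pattern, using the namespace and pod name from the log (the 300s deadline is an illustrative assumption, not the framework's actual timeout):

# Retry the exec every 10s until it succeeds or the deadline passes.
deadline=$((SECONDS + 300))   # illustrative deadline, not the framework's value
until kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1730 ss-0 -- \
    /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'; do
  [ "$SECONDS" -ge "$deadline" ] && { echo 'timed out waiting for ss-0' >&2; break; }
  sleep 10
done

Note that the `|| true` inside the pod shell masks the result of the mv itself, so each retry above failed only because the pod was gone (pods "ss-0" not found), which is exactly the unhealthy-pod condition this burst-scaling spec exercises.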
May 11 14:37:47.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:37:47.974: INFO: namespace statefulset-1730 deletion completed in 6.074468328s • [SLOW TEST:376.207 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:37:47.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components May 11 14:37:48.025: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend May 11 14:37:48.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7738' May 11 14:37:48.336: INFO: stderr: "" May 11 14:37:48.336: INFO: stdout: "service/redis-slave created\n" May 11 14:37:48.336: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend May 11 14:37:48.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7738' May 11 14:37:48.612: INFO: stderr: "" May 11 14:37:48.612: INFO: stdout: "service/redis-master created\n" May 11 14:37:48.612: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 11 14:37:48.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7738' May 11 14:37:49.040: INFO: stderr: "" May 11 14:37:49.040: INFO: stdout: "service/frontend created\n" May 11 14:37:49.040: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 May 11 14:37:49.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7738' May 11 14:37:49.292: INFO: stderr: "" May 11 14:37:49.292: INFO: stdout: "deployment.apps/frontend created\n" May 11 14:37:49.292: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-master spec: replicas: 1 selector: matchLabels: app: redis role: master tier: backend template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 11 14:37:49.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7738' May 11 14:37:49.683: INFO: stderr: "" May 11 14:37:49.683: INFO: stdout: "deployment.apps/redis-master created\n" May 11 14:37:49.684: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 selector: matchLabels: app: redis role: slave tier: backend template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 May 11 14:37:49.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7738' May 11 14:37:49.933: INFO: stderr: "" May 11 14:37:49.933: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app May 11 14:37:49.933: INFO: Waiting for all frontend pods to be Running. May 11 14:37:59.984: INFO: Waiting for frontend to serve content. May 11 14:38:00.000: INFO: Trying to add a new entry to the guestbook. May 11 14:38:00.017: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 11 14:38:00.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7738' May 11 14:38:00.211: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 11 14:38:00.211: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 11 14:38:00.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7738' May 11 14:38:00.365: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 14:38:00.365: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 11 14:38:00.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7738' May 11 14:38:00.675: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 14:38:00.675: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 11 14:38:00.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7738' May 11 14:38:00.867: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 14:38:00.867: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 11 14:38:00.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7738' May 11 14:38:00.974: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 14:38:00.974: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 11 14:38:00.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7738' May 11 14:38:01.134: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 14:38:01.134: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:38:01.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7738" for this suite. 
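The cleanup above pipes each original manifest back into `kubectl delete --grace-period=0 --force`, one object at a time. Since all six guestbook objects carry the labels shown in their manifests (app=redis or app=guestbook), an equivalent one-shot cleanup could use a set-based label selector instead; a sketch under that assumption:

# Force-delete all guestbook deployments and services in one call.
kubectl --kubeconfig=/root/.kube/config -n kubectl-7738 \
  delete deployments,services -l 'app in (redis,guestbook)' \
  --grace-period=0 --force

The same warning applies: immediate deletion does not wait for confirmation that the resources have actually terminated.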
May 11 14:38:43.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:38:43.275: INFO: namespace kubectl-7738 deletion completed in 42.107174219s • [SLOW TEST:55.301 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:38:43.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 11 14:38:44.101: INFO: PodSpec: initContainers in spec.initContainers May 11 14:39:35.951: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-8e98ca51-7ea4-46c7-b555-31200c1d6bb2", GenerateName:"", Namespace:"init-container-839", SelfLink:"/api/v1/namespaces/init-container-839/pods/pod-init-8e98ca51-7ea4-46c7-b555-31200c1d6bb2", UID:"35cb9a56-4644-4955-80d3-44390285724d", ResourceVersion:"10265371", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724804724, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"101587829"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-gp6tk", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0021c4740), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gp6tk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gp6tk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gp6tk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002998a48), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001eaa000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002998ad0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002998af0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002998af8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002998afc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724804725, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724804725, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724804725, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724804724, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.6", PodIP:"10.244.2.10", StartTime:(*v1.Time)(0xc00126fb20), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0019f3730)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0019f37a0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://986281b2864faa6a970e0cbb959e1abd1eb07e29e5ec88a3914adabae964721d"}, v1.ContainerStatus{Name:"init2", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00126fb60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00126fb40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:39:35.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-839" for this suite. May 11 14:40:06.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:40:06.118: INFO: namespace init-container-839 deletion completed in 30.162329981s • [SLOW TEST:82.842 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:40:06.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 11 14:40:06.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-866' May 11 14:40:06.356: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 11 14:40:06.356: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc May 11 14:40:06.413: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-d5jpf] May 11 14:40:06.414: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-d5jpf" in namespace "kubectl-866" to be "running and ready" May 11 14:40:06.417: INFO: Pod "e2e-test-nginx-rc-d5jpf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.8871ms May 11 14:40:08.610: INFO: Pod "e2e-test-nginx-rc-d5jpf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195912647s May 11 14:40:10.613: INFO: Pod "e2e-test-nginx-rc-d5jpf": Phase="Running", Reason="", readiness=true. Elapsed: 4.199516634s May 11 14:40:10.613: INFO: Pod "e2e-test-nginx-rc-d5jpf" satisfied condition "running and ready" May 11 14:40:10.613: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-d5jpf] May 11 14:40:10.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-866' May 11 14:40:10.767: INFO: stderr: "" May 11 14:40:10.767: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 May 11 14:40:10.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-866' May 11 14:40:10.878: INFO: stderr: "" May 11 14:40:10.878: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:40:10.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-866" for this suite. 
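As the deprecation warning notes, `kubectl run --generator=run/v1` is on its way out; what it produced was a bare ReplicationController labeled `run=<name>`. A manifest sketch of the equivalent object, using the name and image from this spec (the label key/value follow the generator's convention):

kubectl --kubeconfig=/root/.kube/config -n kubectl-866 create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
EOF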
May 11 14:40:32.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:40:32.974: INFO: namespace kubectl-866 deletion completed in 22.09399583s • [SLOW TEST:26.856 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:40:32.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:40:39.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3152" for this suite. May 11 14:40:45.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:40:45.328: INFO: namespace namespaces-3152 deletion completed in 6.112270684s STEP: Destroying namespace "nsdeletetest-4739" for this suite. May 11 14:40:45.329: INFO: Namespace nsdeletetest-4739 was already deleted STEP: Destroying namespace "nsdeletetest-4146" for this suite. 
May 11 14:40:51.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:40:51.410: INFO: namespace nsdeletetest-4146 deletion completed in 6.080429827s • [SLOW TEST:18.435 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:40:51.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 14:40:51.489: INFO: Creating deployment "test-recreate-deployment" May 11 14:40:51.543: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 11 14:40:51.560: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 11 14:40:53.566: INFO: Waiting deployment "test-recreate-deployment" to complete May 11 14:40:53.568: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724804851, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724804851, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724804851, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724804851, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 14:40:55.572: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724804851, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724804851, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724804851, 
loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724804851, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 14:40:57.572: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 11 14:40:57.578: INFO: Updating deployment test-recreate-deployment May 11 14:40:57.578: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 11 14:40:58.506: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-3175,SelfLink:/apis/apps/v1/namespaces/deployment-3175/deployments/test-recreate-deployment,UID:360d9261-2600-4c84-98b1-740037b54257,ResourceVersion:10265653,Generation:2,CreationTimestamp:2020-05-11 14:40:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-11 14:40:57 +0000 UTC 2020-05-11 14:40:57 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-11 14:40:58 
+0000 UTC 2020-05-11 14:40:51 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 11 14:40:58.541: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-3175,SelfLink:/apis/apps/v1/namespaces/deployment-3175/replicasets/test-recreate-deployment-5c8c9cc69d,UID:855f7142-04f2-4288-89f3-3aa9c9fb5067,ResourceVersion:10265650,Generation:1,CreationTimestamp:2020-05-11 14:40:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 360d9261-2600-4c84-98b1-740037b54257 0xc00243e5f7 0xc00243e5f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 11 14:40:58.541: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 11 14:40:58.541: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-3175,SelfLink:/apis/apps/v1/namespaces/deployment-3175/replicasets/test-recreate-deployment-6df85df6b9,UID:9234ace9-50d3-4be2-b730-5eafc8f2edf3,ResourceVersion:10265642,Generation:2,CreationTimestamp:2020-05-11 14:40:51 
+0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 360d9261-2600-4c84-98b1-740037b54257 0xc00243e6c7 0xc00243e6c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 11 14:40:58.783: INFO: Pod "test-recreate-deployment-5c8c9cc69d-8x5fh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-8x5fh,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-3175,SelfLink:/api/v1/namespaces/deployment-3175/pods/test-recreate-deployment-5c8c9cc69d-8x5fh,UID:80a7b3cd-f0ea-4a9a-9ce4-4208067532ee,ResourceVersion:10265654,Generation:0,CreationTimestamp:2020-05-11 14:40:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 855f7142-04f2-4288-89f3-3aa9c9fb5067 0xc00243efb7 0xc00243efb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hvlgd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hvlgd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx 
docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hvlgd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00243f030} {node.kubernetes.io/unreachable Exists NoExecute 0xc00243f050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:40:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:40:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:40:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:40:57 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-11 14:40:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:40:58.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3175" for this suite. 
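The deployment dump above shows Strategy{Type:Recreate}: on a rollout the old ReplicaSet is scaled to 0 before the new one is scaled up, which is why the old ReplicaSet shows Replicas:0 while the new pod is still Pending. A minimal manifest sketch matching the image and labels in this spec:

kubectl -n deployment-3175 create -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate        # delete old pods first; no RollingUpdate surge
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF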
May 11 14:41:04.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:41:05.010: INFO: namespace deployment-3175 deletion completed in 6.222491398s • [SLOW TEST:13.601 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:41:05.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 11 14:41:05.808: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-2274,SelfLink:/api/v1/namespaces/watch-2274/configmaps/e2e-watch-test-resource-version,UID:65d2bf04-738f-46c9-8968-318771016b7d,ResourceVersion:10265702,Generation:0,CreationTimestamp:2020-05-11 14:41:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 11 14:41:05.808: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-2274,SelfLink:/api/v1/namespaces/watch-2274/configmaps/e2e-watch-test-resource-version,UID:65d2bf04-738f-46c9-8968-318771016b7d,ResourceVersion:10265703,Generation:0,CreationTimestamp:2020-05-11 14:41:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:41:05.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2274" for this suite. 
May 11 14:41:11.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:41:12.043: INFO: namespace watch-2274 deletion completed in 6.231754969s • [SLOW TEST:7.032 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:41:12.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-j84x STEP: Creating a pod to test atomic-volume-subpath May 11 14:41:12.215: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-j84x" in namespace "subpath-1855" to be "success or failure" May 11 14:41:12.220: INFO: Pod "pod-subpath-test-secret-j84x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.543817ms May 11 14:41:14.298: INFO: Pod "pod-subpath-test-secret-j84x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083284626s May 11 14:41:16.301: INFO: Pod "pod-subpath-test-secret-j84x": Phase="Running", Reason="", readiness=true. Elapsed: 4.086238445s May 11 14:41:18.305: INFO: Pod "pod-subpath-test-secret-j84x": Phase="Running", Reason="", readiness=true. Elapsed: 6.090226833s May 11 14:41:20.309: INFO: Pod "pod-subpath-test-secret-j84x": Phase="Running", Reason="", readiness=true. Elapsed: 8.093366327s May 11 14:41:22.312: INFO: Pod "pod-subpath-test-secret-j84x": Phase="Running", Reason="", readiness=true. Elapsed: 10.096970702s May 11 14:41:24.316: INFO: Pod "pod-subpath-test-secret-j84x": Phase="Running", Reason="", readiness=true. Elapsed: 12.100898106s May 11 14:41:26.320: INFO: Pod "pod-subpath-test-secret-j84x": Phase="Running", Reason="", readiness=true. Elapsed: 14.104935285s May 11 14:41:28.323: INFO: Pod "pod-subpath-test-secret-j84x": Phase="Running", Reason="", readiness=true. Elapsed: 16.108232265s May 11 14:41:30.327: INFO: Pod "pod-subpath-test-secret-j84x": Phase="Running", Reason="", readiness=true. Elapsed: 18.112037538s May 11 14:41:32.331: INFO: Pod "pod-subpath-test-secret-j84x": Phase="Running", Reason="", readiness=true. Elapsed: 20.115780057s May 11 14:41:34.334: INFO: Pod "pod-subpath-test-secret-j84x": Phase="Running", Reason="", readiness=true. Elapsed: 22.118371395s May 11 14:41:36.338: INFO: Pod "pod-subpath-test-secret-j84x": Phase="Running", Reason="", readiness=true. Elapsed: 24.122734812s May 11 14:41:38.341: INFO: Pod "pod-subpath-test-secret-j84x": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.125639979s STEP: Saw pod success May 11 14:41:38.341: INFO: Pod "pod-subpath-test-secret-j84x" satisfied condition "success or failure" May 11 14:41:38.343: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-j84x container test-container-subpath-secret-j84x: STEP: delete the pod May 11 14:41:38.362: INFO: Waiting for pod pod-subpath-test-secret-j84x to disappear May 11 14:41:38.367: INFO: Pod pod-subpath-test-secret-j84x no longer exists STEP: Deleting pod pod-subpath-test-secret-j84x May 11 14:41:38.367: INFO: Deleting pod "pod-subpath-test-secret-j84x" in namespace "subpath-1855" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:41:38.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1855" for this suite. May 11 14:41:44.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:41:44.457: INFO: namespace subpath-1855 deletion completed in 6.085970747s • [SLOW TEST:32.414 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:41:44.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 11 14:41:44.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5928' May 11 14:41:47.024: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 11 14:41:47.024: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 May 11 14:41:47.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-5928' May 11 14:41:48.070: INFO: stderr: "" May 11 14:41:48.070: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:41:48.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5928" for this suite. May 11 14:42:10.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:42:10.299: INFO: namespace kubectl-5928 deletion completed in 22.224520786s • [SLOW TEST:25.841 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:42:10.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:42:10.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6375" for this suite. 
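The QoS class asserted by this test is not set by the user; the API server derives it from the pod's resource requests and limits at admission time. A minimal sketch of the rule, assuming the client setup and imports from the earlier watch sketch plus corev1 "k8s.io/api/core/v1" and resource "k8s.io/apimachinery/pkg/api/resource" (pod name, image, and quantities are placeholders):

// Requests == limits on every container => Guaranteed; requests set but
// lower than limits => Burstable; no requests or limits at all => BestEffort.
func createGuaranteedPod(ctx context.Context, client kubernetes.Interface, ns string) (corev1.PodQOSClass, error) {
	res := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "qos-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:      "main",
				Image:     "k8s.gcr.io/pause:3.1", // any image works for the demo
				Resources: corev1.ResourceRequirements{Requests: res, Limits: res},
			}},
		},
	}
	created, err := client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	if err != nil {
		return "", err
	}
	// The apiserver computes status.qosClass during admission, so it is
	// already populated on the object returned by Create.
	return created.Status.QOSClass, nil // expected: Guaranteed
}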
May 11 14:42:32.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:42:32.674: INFO: namespace pods-6375 deletion completed in 22.244226628s • [SLOW TEST:22.375 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:42:32.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-7e9d3106-dd91-4f4f-ba30-64b30454bcef STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-7e9d3106-dd91-4f4f-ba30-64b30454bcef STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:44:05.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5696" for this suite. 
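The long "waiting to observe update in volume" phase above (the test ran for 115 seconds) reflects how projected ConfigMap volumes propagate updates: the kubelet rewrites the mounted files on its periodic sync pass, which by default happens on the order of a minute, not instantly. A sketch of the pod shape involved, with placeholder names and assuming the same imports as the earlier sketches:

// A projected volume that surfaces a ConfigMap's keys as files; editing the
// ConfigMap later changes the file contents in the running pod after the
// kubelet's next sync.
func projectedConfigMapPod(cmName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "shell",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do cat /etc/projected/*; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected",
				}},
			}},
		},
	}
}

Note this only holds for whole-volume mounts; a container using subPath to mount a single projected file does not receive updates.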
May 11 14:44:27.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:44:27.717: INFO: namespace projected-5696 deletion completed in 22.104504098s • [SLOW TEST:115.043 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:44:27.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8153.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8153.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8153.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8153.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8153.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8153.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8153.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8153.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8153.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8153.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8153.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 127.41.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.41.127_udp@PTR;check="$$(dig +tcp +noall +answer +search 127.41.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.41.127_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8153.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8153.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8153.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8153.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8153.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8153.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8153.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8153.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8153.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8153.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8153.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 127.41.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.41.127_udp@PTR;check="$$(dig +tcp +noall +answer +search 127.41.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.41.127_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 14:44:36.077: INFO: Unable to read wheezy_udp@dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:36.079: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:36.081: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:36.084: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:36.104: INFO: Unable to read jessie_udp@dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:36.107: INFO: Unable to read jessie_tcp@dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:36.111: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:36.114: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:36.131: INFO: Lookups using dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce failed for: [wheezy_udp@dns-test-service.dns-8153.svc.cluster.local wheezy_tcp@dns-test-service.dns-8153.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local jessie_udp@dns-test-service.dns-8153.svc.cluster.local jessie_tcp@dns-test-service.dns-8153.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local] May 11 14:44:41.194: INFO: Unable to read wheezy_udp@dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:41.196: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods 
dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:41.198: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:41.199: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:41.210: INFO: Unable to read jessie_udp@dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:41.212: INFO: Unable to read jessie_tcp@dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:41.214: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:41.216: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:41.226: INFO: Lookups using dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce failed for: [wheezy_udp@dns-test-service.dns-8153.svc.cluster.local wheezy_tcp@dns-test-service.dns-8153.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local jessie_udp@dns-test-service.dns-8153.svc.cluster.local jessie_tcp@dns-test-service.dns-8153.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local] May 11 14:44:46.135: INFO: Unable to read wheezy_udp@dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:46.138: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:46.141: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:46.144: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:46.159: INFO: Unable to read jessie_udp@dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the 
server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:46.162: INFO: Unable to read jessie_tcp@dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:46.164: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:46.166: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:46.181: INFO: Lookups using dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce failed for: [wheezy_udp@dns-test-service.dns-8153.svc.cluster.local wheezy_tcp@dns-test-service.dns-8153.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local jessie_udp@dns-test-service.dns-8153.svc.cluster.local jessie_tcp@dns-test-service.dns-8153.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local] May 11 14:44:51.136: INFO: Unable to read wheezy_udp@dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:51.141: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:51.144: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:51.147: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:51.215: INFO: Unable to read jessie_udp@dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:51.217: INFO: Unable to read jessie_tcp@dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:51.219: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:51.221: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local from pod 
dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:51.233: INFO: Lookups using dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce failed for: [wheezy_udp@dns-test-service.dns-8153.svc.cluster.local wheezy_tcp@dns-test-service.dns-8153.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local jessie_udp@dns-test-service.dns-8153.svc.cluster.local jessie_tcp@dns-test-service.dns-8153.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local] May 11 14:44:56.137: INFO: Unable to read wheezy_udp@dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:56.141: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:56.144: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:56.147: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:56.166: INFO: Unable to read jessie_udp@dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:56.169: INFO: Unable to read jessie_tcp@dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:56.171: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:56.175: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:44:56.191: INFO: Lookups using dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce failed for: [wheezy_udp@dns-test-service.dns-8153.svc.cluster.local wheezy_tcp@dns-test-service.dns-8153.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local jessie_udp@dns-test-service.dns-8153.svc.cluster.local jessie_tcp@dns-test-service.dns-8153.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local] May 11 
14:45:01.136: INFO: Unable to read wheezy_udp@dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:45:01.140: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:45:01.143: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:45:01.147: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:45:01.169: INFO: Unable to read jessie_udp@dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:45:01.172: INFO: Unable to read jessie_tcp@dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:45:01.176: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:45:01.178: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local from pod dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce: the server could not find the requested resource (get pods dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce) May 11 14:45:01.197: INFO: Lookups using dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce failed for: [wheezy_udp@dns-test-service.dns-8153.svc.cluster.local wheezy_tcp@dns-test-service.dns-8153.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local jessie_udp@dns-test-service.dns-8153.svc.cluster.local jessie_tcp@dns-test-service.dns-8153.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8153.svc.cluster.local] May 11 14:45:06.211: INFO: DNS probes using dns-8153/dns-test-156406f4-00de-4ce2-bd97-c9b7e457e9ce succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:45:07.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8153" for this suite. 
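The dig loops above probe the records Kubernetes DNS publishes for a service: an A record at <service>.<namespace>.svc.cluster.local, an SRV record per named port at _<port>._<proto>.<service>.<namespace>.svc.cluster.local, and a PTR record for the ClusterIP (the in-addr.arpa query). The early "Unable to read" failures are just the probe pods not being ready yet; the lookups succeed once DNS converges. The same probes in Go, using only the standard library — this must run inside a pod so the cluster resolver is used, and the names are the test's, shown as placeholders:

import (
	"fmt"
	"net"
)

func probeServiceDNS() error {
	// A record for the service's ClusterIP.
	ips, err := net.LookupHost("dns-test-service.dns-8153.svc.cluster.local")
	if err != nil {
		return err
	}
	fmt.Println("A:", ips)

	// SRV record for the named port "http" over TCP.
	_, srvs, err := net.LookupSRV("http", "tcp", "dns-test-service.dns-8153.svc.cluster.local")
	if err != nil {
		return err
	}
	for _, s := range srvs {
		fmt.Printf("SRV: %s:%d\n", s.Target, s.Port)
	}
	return nil
}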
May 11 14:45:13.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:45:13.642: INFO: namespace dns-8153 deletion completed in 6.185189266s • [SLOW TEST:45.924 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:45:13.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0511 14:45:14.876013 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 11 14:45:14.876: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:45:14.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-956" for this suite. 
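"Not orphaning" in this garbage-collector test means the delete request carries a propagation policy that lets the GC follow ownerReferences downward: deleting the Deployment removes its ReplicaSet, which in turn removes the Pods (the "expected 0 rs, got 1 rs" lines are the test polling while that cascade completes). A sketch of the client call, assuming the client/ctx setup from the earlier sketches:

// Background returns as soon as the Deployment is gone and lets the GC
// reap children asynchronously; Foreground blocks deletion until the
// children are gone; Orphan leaves the ReplicaSet and Pods behind.
func deleteDeploymentCascading(ctx context.Context, client kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationBackground
	return client.AppsV1().Deployments(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}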
May 11 14:45:22.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:45:22.957: INFO: namespace gc-956 deletion completed in 8.079074084s • [SLOW TEST:9.315 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:45:22.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-729efb86-1d86-4236-98ca-a5fd581241c1 STEP: Creating a pod to test consume secrets May 11 14:45:23.050: INFO: Waiting up to 5m0s for pod "pod-secrets-d6c4d192-2ac7-4b67-bbd0-c1f9b4af93cd" in namespace "secrets-5686" to be "success or failure" May 11 14:45:23.067: INFO: Pod "pod-secrets-d6c4d192-2ac7-4b67-bbd0-c1f9b4af93cd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.942326ms May 11 14:45:25.255: INFO: Pod "pod-secrets-d6c4d192-2ac7-4b67-bbd0-c1f9b4af93cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204593844s May 11 14:45:27.259: INFO: Pod "pod-secrets-d6c4d192-2ac7-4b67-bbd0-c1f9b4af93cd": Phase="Running", Reason="", readiness=true. Elapsed: 4.20901555s May 11 14:45:29.262: INFO: Pod "pod-secrets-d6c4d192-2ac7-4b67-bbd0-c1f9b4af93cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.212203548s STEP: Saw pod success May 11 14:45:29.262: INFO: Pod "pod-secrets-d6c4d192-2ac7-4b67-bbd0-c1f9b4af93cd" satisfied condition "success or failure" May 11 14:45:29.265: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-d6c4d192-2ac7-4b67-bbd0-c1f9b4af93cd container secret-volume-test: STEP: delete the pod May 11 14:45:29.284: INFO: Waiting for pod pod-secrets-d6c4d192-2ac7-4b67-bbd0-c1f9b4af93cd to disappear May 11 14:45:29.287: INFO: Pod pod-secrets-d6c4d192-2ac7-4b67-bbd0-c1f9b4af93cd no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:45:29.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5686" for this suite. 
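This Secrets test mounts a single Secret at more than one path in the same pod, which is just two volumes pointing at the same SecretName. A sketch of the pod shape, with placeholder names and assuming the imports from the earlier sketches:

// One Secret, two read-only mount points; the test container reads both
// copies and exits 0, which is why the pod ends in Phase=Succeeded above.
func multiVolumeSecretPod(secretName string) *corev1.Pod {
	vol := func(n string) corev1.Volume {
		return corev1.Volume{
			Name: n,
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: secretName},
			},
		}
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes:       []corev1.Volume{vol("secret-volume-1"), vol("secret-volume-2")},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/secret-volume-1/* /etc/secret-volume-2/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
		},
	}
}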
May 11 14:45:35.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:45:35.355: INFO: namespace secrets-5686 deletion completed in 6.065604078s • [SLOW TEST:12.397 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:45:35.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 11 14:45:36.209: INFO: Pod name wrapped-volume-race-1656f69d-31bb-41a4-9351-d66f30cde515: Found 0 pods out of 5 May 11 14:45:41.217: INFO: Pod name wrapped-volume-race-1656f69d-31bb-41a4-9351-d66f30cde515: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-1656f69d-31bb-41a4-9351-d66f30cde515 in namespace emptydir-wrapper-5694, will wait for the garbage collector to delete the pods May 11 14:45:57.314: INFO: Deleting ReplicationController wrapped-volume-race-1656f69d-31bb-41a4-9351-d66f30cde515 took: 25.118596ms May 11 14:45:57.714: INFO: Terminating ReplicationController wrapped-volume-race-1656f69d-31bb-41a4-9351-d66f30cde515 pods took: 400.157703ms STEP: Creating RC which spawns configmap-volume pods May 11 14:46:42.449: INFO: Pod name wrapped-volume-race-9a2e98ca-07b6-44d1-8f67-35fa6de85f1e: Found 0 pods out of 5 May 11 14:46:47.727: INFO: Pod name wrapped-volume-race-9a2e98ca-07b6-44d1-8f67-35fa6de85f1e: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9a2e98ca-07b6-44d1-8f67-35fa6de85f1e in namespace emptydir-wrapper-5694, will wait for the garbage collector to delete the pods May 11 14:47:03.843: INFO: Deleting ReplicationController wrapped-volume-race-9a2e98ca-07b6-44d1-8f67-35fa6de85f1e took: 7.441281ms May 11 14:47:04.143: INFO: Terminating ReplicationController wrapped-volume-race-9a2e98ca-07b6-44d1-8f67-35fa6de85f1e pods took: 300.17955ms STEP: Creating RC which spawns configmap-volume pods May 11 14:47:41.538: INFO: Pod name wrapped-volume-race-5284b2a4-db56-4174-8321-b74bed310924: Found 0 pods out of 5 May 11 14:47:46.546: INFO: Pod name wrapped-volume-race-5284b2a4-db56-4174-8321-b74bed310924: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-5284b2a4-db56-4174-8321-b74bed310924 in namespace emptydir-wrapper-5694, will wait for the garbage collector to delete the pods May 11 14:48:02.644: INFO: Deleting ReplicationController 
wrapped-volume-race-5284b2a4-db56-4174-8321-b74bed310924 took: 7.265134ms May 11 14:48:03.044: INFO: Terminating ReplicationController wrapped-volume-race-5284b2a4-db56-4174-8321-b74bed310924 pods took: 400.231467ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:48:55.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5694" for this suite. May 11 14:49:05.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:49:05.958: INFO: namespace emptydir-wrapper-5694 deletion completed in 10.400439052s • [SLOW TEST:210.603 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:49:05.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 11 14:49:10.807: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:49:11.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5747" for this suite. 
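The assertion in this Container Runtime test ("Expected: &{} to match Container's Termination Message: --") hinges on TerminationMessagePolicy. With FallbackToLogsOnError, the kubelet only substitutes the tail of the container's logs when the container fails and wrote nothing to its termination-message file; a container that succeeds without writing the file therefore reports an empty message, which is exactly what the test checks. A sketch of the container spec, with placeholder names:

func fallbackToLogsContainer() corev1.Container {
	return corev1.Container{
		Name:                     "termination-message-container",
		Image:                    "busybox",
		Command:                  []string{"sh", "-c", "exit 0"}, // succeed, write nothing
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
}

After the pod terminates, the message surfaces at pod.Status.ContainerStatuses[i].State.Terminated.Message — empty here, log tail only on a non-zero exit.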
May 11 14:49:17.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:49:17.162: INFO: namespace container-runtime-5747 deletion completed in 6.099295464s • [SLOW TEST:11.204 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:49:17.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 11 14:49:17.214: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:49:23.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8301" for this suite. 
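The contract this InitContainer test verifies: init containers run sequentially before any app container, and with RestartPolicy Never a failing init container is not retried — the pod goes straight to Phase=Failed and the app container never starts. A sketch of the pod shape, with placeholder names and assuming the imports from the earlier sketches:

func failingInitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // no retry on init failure
			InitContainers: []corev1.Container{{
				Name:    "init1",
				Image:   "busybox",
				Command: []string{"sh", "-c", "exit 1"}, // always fails
			}},
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo never runs"},
			}},
		},
	}
}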
May 11 14:49:30.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:49:30.474: INFO: namespace init-container-8301 deletion completed in 6.167584024s • [SLOW TEST:13.311 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:49:30.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-7509 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-7509 STEP: Deleting pre-stop pod May 11 14:49:43.755: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:49:43.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-7509" for this suite. 
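Mechanically, the PreStop test works by giving the tester pod a preStop lifecycle hook that calls back to the server pod; deleting the tester fires the hook before the container receives SIGTERM, and the server's report ("prestop": 1 in the JSON above) proves it ran. A hedged sketch of such a container — the server address and port are placeholders, and LifecycleHandler is the type name in current client-go (older releases called it Handler):

func preStopContainer(serverIP string) corev1.Container {
	return corev1.Container{
		Name:    "tester",
		Image:   "busybox",
		Command: []string{"sleep", "3600"},
		Lifecycle: &corev1.Lifecycle{
			PreStop: &corev1.LifecycleHandler{
				// Runs on pod deletion, before SIGTERM; must finish within
				// the pod's terminationGracePeriodSeconds.
				Exec: &corev1.ExecAction{
					Command: []string{"wget", "-qO-", "http://" + serverIP + ":8080/prestop"},
				},
			},
		},
	}
}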
May 11 14:50:25.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:50:25.901: INFO: namespace prestop-7509 deletion completed in 42.109993082s • [SLOW TEST:55.426 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:50:25.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 14:50:26.162: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 11 14:50:33.295: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 11 14:50:54.858: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 11 14:50:56.861: INFO: Creating deployment "test-rollover-deployment" May 11 14:50:57.630: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 11 14:51:01.160: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 11 14:51:01.407: INFO: Ensure that both replica sets have 1 created replica May 11 14:51:01.412: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 11 14:51:01.417: INFO: Updating deployment test-rollover-deployment May 11 14:51:01.417: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 11 14:51:04.348: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 11 14:51:04.657: INFO: Make sure deployment "test-rollover-deployment" is complete May 11 14:51:04.662: INFO: all replica sets need to contain the pod-template-hash label May 11 14:51:04.662: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805463, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, 
CollisionCount:(*int32)(nil)} May 11 14:51:06.833: INFO: all replica sets need to contain the pod-template-hash label May 11 14:51:06.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805463, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 14:51:08.669: INFO: all replica sets need to contain the pod-template-hash label May 11 14:51:08.669: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805463, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 14:51:10.779: INFO: all replica sets need to contain the pod-template-hash label May 11 14:51:10.779: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805463, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 14:51:13.192: INFO: all replica sets need to contain the pod-template-hash label May 11 14:51:13.192: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805470, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 14:51:14.981: INFO: all replica sets need to contain the pod-template-hash label May 11 14:51:14.981: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805470, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 14:51:16.672: INFO: all replica sets need to contain the pod-template-hash label May 11 14:51:16.672: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805470, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 14:51:18.670: INFO: all replica sets need to contain the pod-template-hash label May 11 14:51:18.670: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805470, 
loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 14:51:20.673: INFO: all replica sets need to contain the pod-template-hash label May 11 14:51:20.673: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805470, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 14:51:23.943: INFO: May 11 14:51:23.943: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805481, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805457, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 14:51:25.072: INFO: May 11 14:51:25.072: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 11 14:51:26.178: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-1485,SelfLink:/apis/apps/v1/namespaces/deployment-1485/deployments/test-rollover-deployment,UID:d9f863b9-2cef-4f85-9f87-916d0c6f6ca4,ResourceVersion:10268121,Generation:2,CreationTimestamp:2020-05-11 14:50:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-11 14:50:57 +0000 UTC 2020-05-11 14:50:57 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-11 14:51:23 +0000 UTC 2020-05-11 14:50:57 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 11 14:51:26.784: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-1485,SelfLink:/apis/apps/v1/namespaces/deployment-1485/replicasets/test-rollover-deployment-854595fc44,UID:96dd4cbf-4281-4138-8a23-cfb6d2ad3adb,ResourceVersion:10268107,Generation:2,CreationTimestamp:2020-05-11 14:51:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d9f863b9-2cef-4f85-9f87-916d0c6f6ca4 0xc00280bd57 0xc00280bd58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 
854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 11 14:51:26.784: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 11 14:51:26.784: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-1485,SelfLink:/apis/apps/v1/namespaces/deployment-1485/replicasets/test-rollover-controller,UID:26947481-90ed-4e24-9004-55d2c65d3881,ResourceVersion:10268119,Generation:2,CreationTimestamp:2020-05-11 14:50:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d9f863b9-2cef-4f85-9f87-916d0c6f6ca4 0xc00280bc77 0xc00280bc78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 11 14:51:26.785: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-1485,SelfLink:/apis/apps/v1/namespaces/deployment-1485/replicasets/test-rollover-deployment-9b8b997cf,UID:6fdb843f-8f3f-48d5-a440-3bfa9d886d80,ResourceVersion:10268064,Generation:2,CreationTimestamp:2020-05-11 14:50:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d9f863b9-2cef-4f85-9f87-916d0c6f6ca4 0xc00280be20 0xc00280be21}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 11 14:51:27.172: INFO: Pod "test-rollover-deployment-854595fc44-xmfgs" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-xmfgs,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-1485,SelfLink:/api/v1/namespaces/deployment-1485/pods/test-rollover-deployment-854595fc44-xmfgs,UID:476c335c-68cf-4b73-b786-648b937255ac,ResourceVersion:10268085,Generation:0,CreationTimestamp:2020-05-11 14:51:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 96dd4cbf-4281-4138-8a23-cfb6d2ad3adb 0xc002698a07 0xc002698a08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cfdcb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfdcb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-cfdcb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002698a80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002698aa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:51:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:51:10 +0000 UTC } {ContainersReady True 0001-01-01 
00:00:00 +0000 UTC 2020-05-11 14:51:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:51:02 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.32,StartTime:2020-05-11 14:51:03 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-11 14:51:10 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://a868d5fa1b24c7c144587b16c8753c5c64c19e5c9d02b9a71b4babeaf57209b8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:51:27.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1485" for this suite. May 11 14:51:40.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:51:40.378: INFO: namespace deployment-1485 deletion completed in 12.71705355s • [SLOW TEST:74.477 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:51:40.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 11 14:51:40.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-4550' May 11 14:51:40.926: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 11 14:51:40.926: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 May 11 14:51:45.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4550' May 11 14:52:02.312: INFO: stderr: "" May 11 14:52:02.312: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:52:02.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4550" for this suite. May 11 14:52:26.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:52:26.704: INFO: namespace kubectl-4550 deletion completed in 24.388420256s • [SLOW TEST:46.325 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:52:26.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-wl756 in namespace proxy-6304 I0511 14:52:27.680075 7 runners.go:180] Created replication controller with name: proxy-service-wl756, namespace: proxy-6304, replica count: 1 I0511 14:52:28.730477 7 runners.go:180] proxy-service-wl756 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 14:52:29.730671 7 runners.go:180] proxy-service-wl756 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 14:52:30.730872 7 runners.go:180] proxy-service-wl756 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 14:52:31.731067 7 runners.go:180] proxy-service-wl756 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 14:52:32.731244 7 runners.go:180] proxy-service-wl756 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 
unknown, 1 runningButNotReady I0511 14:52:33.731434 7 runners.go:180] proxy-service-wl756 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 14:52:33.763: INFO: setup took 6.379212864s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 11 14:52:33.768: INFO: (0) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:1080/proxy/: ... (200; 4.705396ms) May 11 14:52:33.770: INFO: (0) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:1080/proxy/: test<... (200; 7.051951ms) May 11 14:52:33.770: INFO: (0) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7/proxy/: test (200; 6.918603ms) May 11 14:52:33.772: INFO: (0) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname2/proxy/: bar (200; 9.151866ms) May 11 14:52:33.772: INFO: (0) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:162/proxy/: bar (200; 9.530473ms) May 11 14:52:33.773: INFO: (0) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:162/proxy/: bar (200; 9.567815ms) May 11 14:52:33.775: INFO: (0) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname2/proxy/: bar (200; 11.689148ms) May 11 14:52:33.775: INFO: (0) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname1/proxy/: foo (200; 11.515274ms) May 11 14:52:33.775: INFO: (0) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:160/proxy/: foo (200; 11.640017ms) May 11 14:52:33.775: INFO: (0) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:160/proxy/: foo (200; 11.739389ms) May 11 14:52:33.775: INFO: (0) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname1/proxy/: foo (200; 11.808209ms) May 11 14:52:33.779: INFO: (0) /api/v1/namespaces/proxy-6304/services/https:proxy-service-wl756:tlsportname1/proxy/: tls baz (200; 16.133018ms) May 11 14:52:33.779: INFO: (0) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:460/proxy/: tls baz (200; 16.185227ms) May 11 14:52:33.779: INFO: (0) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:462/proxy/: tls qux (200; 16.171862ms) May 11 14:52:33.779: INFO: (0) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:443/proxy/: test (200; 6.012871ms) May 11 14:52:33.790: INFO: (1) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname2/proxy/: bar (200; 6.051173ms) May 11 14:52:33.791: INFO: (1) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:160/proxy/: foo (200; 6.397377ms) May 11 14:52:33.791: INFO: (1) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:462/proxy/: tls qux (200; 6.887249ms) May 11 14:52:33.791: INFO: (1) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:1080/proxy/: test<... (200; 6.857169ms) May 11 14:52:33.791: INFO: (1) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:160/proxy/: foo (200; 7.001698ms) May 11 14:52:33.791: INFO: (1) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:162/proxy/: bar (200; 6.929715ms) May 11 14:52:33.791: INFO: (1) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:1080/proxy/: ... 
(200; 6.962485ms) May 11 14:52:33.791: INFO: (1) /api/v1/namespaces/proxy-6304/services/https:proxy-service-wl756:tlsportname2/proxy/: tls qux (200; 6.984792ms) May 11 14:52:33.792: INFO: (1) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:162/proxy/: bar (200; 7.414467ms) May 11 14:52:33.792: INFO: (1) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:443/proxy/: ... (200; 5.151744ms) May 11 14:52:33.798: INFO: (2) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:462/proxy/: tls qux (200; 5.605138ms) May 11 14:52:33.798: INFO: (2) /api/v1/namespaces/proxy-6304/services/https:proxy-service-wl756:tlsportname1/proxy/: tls baz (200; 5.948546ms) May 11 14:52:33.798: INFO: (2) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:1080/proxy/: test<... (200; 6.113202ms) May 11 14:52:33.798: INFO: (2) /api/v1/namespaces/proxy-6304/services/https:proxy-service-wl756:tlsportname2/proxy/: tls qux (200; 6.272299ms) May 11 14:52:33.798: INFO: (2) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname2/proxy/: bar (200; 6.250688ms) May 11 14:52:33.798: INFO: (2) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname2/proxy/: bar (200; 6.258313ms) May 11 14:52:33.798: INFO: (2) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:160/proxy/: foo (200; 6.287384ms) May 11 14:52:33.798: INFO: (2) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname1/proxy/: foo (200; 6.285266ms) May 11 14:52:33.799: INFO: (2) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7/proxy/: test (200; 6.37904ms) May 11 14:52:33.799: INFO: (2) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname1/proxy/: foo (200; 6.388566ms) May 11 14:52:33.802: INFO: (3) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:443/proxy/: ... (200; 4.053673ms) May 11 14:52:33.803: INFO: (3) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:162/proxy/: bar (200; 4.119379ms) May 11 14:52:33.803: INFO: (3) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7/proxy/: test (200; 4.24833ms) May 11 14:52:33.803: INFO: (3) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname1/proxy/: foo (200; 4.682483ms) May 11 14:52:33.803: INFO: (3) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname1/proxy/: foo (200; 4.76815ms) May 11 14:52:33.803: INFO: (3) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:1080/proxy/: test<... 
(200; 4.761277ms) May 11 14:52:33.803: INFO: (3) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname2/proxy/: bar (200; 4.767562ms) May 11 14:52:33.803: INFO: (3) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:160/proxy/: foo (200; 4.857239ms) May 11 14:52:33.803: INFO: (3) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:162/proxy/: bar (200; 4.889955ms) May 11 14:52:33.804: INFO: (3) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:460/proxy/: tls baz (200; 4.924302ms) May 11 14:52:33.804: INFO: (3) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:462/proxy/: tls qux (200; 5.510384ms) May 11 14:52:33.805: INFO: (3) /api/v1/namespaces/proxy-6304/services/https:proxy-service-wl756:tlsportname1/proxy/: tls baz (200; 6.337547ms) May 11 14:52:33.809: INFO: (4) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:462/proxy/: tls qux (200; 3.921925ms) May 11 14:52:33.809: INFO: (4) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:160/proxy/: foo (200; 3.994587ms) May 11 14:52:33.809: INFO: (4) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:1080/proxy/: ... (200; 4.090238ms) May 11 14:52:33.809: INFO: (4) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7/proxy/: test (200; 4.31213ms) May 11 14:52:33.810: INFO: (4) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:162/proxy/: bar (200; 4.694216ms) May 11 14:52:33.810: INFO: (4) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:1080/proxy/: test<... (200; 4.615274ms) May 11 14:52:33.810: INFO: (4) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:162/proxy/: bar (200; 5.063777ms) May 11 14:52:33.810: INFO: (4) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname1/proxy/: foo (200; 5.093555ms) May 11 14:52:33.810: INFO: (4) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:443/proxy/: ... (200; 4.582877ms) May 11 14:52:33.816: INFO: (5) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:160/proxy/: foo (200; 4.593008ms) May 11 14:52:33.816: INFO: (5) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:462/proxy/: tls qux (200; 4.870601ms) May 11 14:52:33.816: INFO: (5) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:160/proxy/: foo (200; 5.000931ms) May 11 14:52:33.816: INFO: (5) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:1080/proxy/: test<... 
(200; 4.955481ms) May 11 14:52:33.816: INFO: (5) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7/proxy/: test (200; 5.059829ms) May 11 14:52:33.816: INFO: (5) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:162/proxy/: bar (200; 5.177861ms) May 11 14:52:33.819: INFO: (5) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname2/proxy/: bar (200; 7.711574ms) May 11 14:52:33.819: INFO: (5) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname2/proxy/: bar (200; 7.960296ms) May 11 14:52:33.819: INFO: (5) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname1/proxy/: foo (200; 8.013746ms) May 11 14:52:33.819: INFO: (5) /api/v1/namespaces/proxy-6304/services/https:proxy-service-wl756:tlsportname2/proxy/: tls qux (200; 7.94377ms) May 11 14:52:33.819: INFO: (5) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname1/proxy/: foo (200; 7.910047ms) May 11 14:52:33.819: INFO: (5) /api/v1/namespaces/proxy-6304/services/https:proxy-service-wl756:tlsportname1/proxy/: tls baz (200; 7.977369ms) May 11 14:52:33.821: INFO: (6) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:460/proxy/: tls baz (200; 1.92075ms) May 11 14:52:33.823: INFO: (6) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7/proxy/: test (200; 3.699025ms) May 11 14:52:33.823: INFO: (6) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:162/proxy/: bar (200; 4.036057ms) May 11 14:52:33.823: INFO: (6) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:162/proxy/: bar (200; 4.213348ms) May 11 14:52:33.823: INFO: (6) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:160/proxy/: foo (200; 4.356783ms) May 11 14:52:33.823: INFO: (6) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:1080/proxy/: test<... (200; 4.345244ms) May 11 14:52:33.823: INFO: (6) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:462/proxy/: tls qux (200; 4.374333ms) May 11 14:52:33.824: INFO: (6) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:1080/proxy/: ... (200; 4.520465ms) May 11 14:52:33.824: INFO: (6) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:160/proxy/: foo (200; 4.637861ms) May 11 14:52:33.824: INFO: (6) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:443/proxy/: test<... (200; 3.330076ms) May 11 14:52:33.830: INFO: (7) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:462/proxy/: tls qux (200; 3.45026ms) May 11 14:52:33.830: INFO: (7) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:160/proxy/: foo (200; 3.834126ms) May 11 14:52:33.831: INFO: (7) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7/proxy/: test (200; 4.259311ms) May 11 14:52:33.832: INFO: (7) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:160/proxy/: foo (200; 5.458371ms) May 11 14:52:33.832: INFO: (7) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:162/proxy/: bar (200; 5.444714ms) May 11 14:52:33.832: INFO: (7) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname1/proxy/: foo (200; 5.597168ms) May 11 14:52:33.832: INFO: (7) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:1080/proxy/: ... (200; 5.642479ms) May 11 14:52:33.832: INFO: (7) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:443/proxy/: test (200; 6.128112ms) May 11 14:52:33.846: INFO: (8) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:1080/proxy/: test<... 
(200; 12.306077ms) May 11 14:52:33.846: INFO: (8) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:160/proxy/: foo (200; 12.355469ms) May 11 14:52:33.846: INFO: (8) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname2/proxy/: bar (200; 12.426987ms) May 11 14:52:33.846: INFO: (8) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:460/proxy/: tls baz (200; 12.653758ms) May 11 14:52:33.846: INFO: (8) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname1/proxy/: foo (200; 12.603007ms) May 11 14:52:33.846: INFO: (8) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:160/proxy/: foo (200; 12.854778ms) May 11 14:52:33.846: INFO: (8) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:1080/proxy/: ... (200; 12.899761ms) May 11 14:52:33.846: INFO: (8) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:162/proxy/: bar (200; 12.995007ms) May 11 14:52:33.846: INFO: (8) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:443/proxy/: test (200; 2.974051ms) May 11 14:52:33.851: INFO: (9) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:1080/proxy/: ... (200; 3.126576ms) May 11 14:52:33.851: INFO: (9) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:162/proxy/: bar (200; 3.079222ms) May 11 14:52:33.851: INFO: (9) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:162/proxy/: bar (200; 3.187422ms) May 11 14:52:33.852: INFO: (9) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:160/proxy/: foo (200; 4.011162ms) May 11 14:52:33.852: INFO: (9) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:460/proxy/: tls baz (200; 4.011555ms) May 11 14:52:33.852: INFO: (9) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:160/proxy/: foo (200; 4.156312ms) May 11 14:52:33.852: INFO: (9) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:462/proxy/: tls qux (200; 4.213895ms) May 11 14:52:33.852: INFO: (9) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:443/proxy/: test<... (200; 4.3071ms) May 11 14:52:33.852: INFO: (9) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname2/proxy/: bar (200; 4.371886ms) May 11 14:52:33.853: INFO: (9) /api/v1/namespaces/proxy-6304/services/https:proxy-service-wl756:tlsportname1/proxy/: tls baz (200; 4.516641ms) May 11 14:52:33.853: INFO: (9) /api/v1/namespaces/proxy-6304/services/https:proxy-service-wl756:tlsportname2/proxy/: tls qux (200; 4.584876ms) May 11 14:52:33.853: INFO: (9) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname1/proxy/: foo (200; 4.567036ms) May 11 14:52:33.853: INFO: (9) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname2/proxy/: bar (200; 4.761899ms) May 11 14:52:33.858: INFO: (10) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:162/proxy/: bar (200; 4.772013ms) May 11 14:52:33.858: INFO: (10) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:1080/proxy/: test<... 
(200; 4.835892ms) May 11 14:52:33.858: INFO: (10) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:162/proxy/: bar (200; 5.039833ms) May 11 14:52:33.858: INFO: (10) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:460/proxy/: tls baz (200; 5.077261ms) May 11 14:52:33.858: INFO: (10) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:160/proxy/: foo (200; 5.027291ms) May 11 14:52:33.858: INFO: (10) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:443/proxy/: test (200; 5.195162ms) May 11 14:52:33.858: INFO: (10) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:160/proxy/: foo (200; 5.164312ms) May 11 14:52:33.858: INFO: (10) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:1080/proxy/: ... (200; 5.599282ms) May 11 14:52:33.859: INFO: (10) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname2/proxy/: bar (200; 6.022105ms) May 11 14:52:33.860: INFO: (10) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname1/proxy/: foo (200; 6.621672ms) May 11 14:52:33.860: INFO: (10) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname2/proxy/: bar (200; 6.720572ms) May 11 14:52:33.860: INFO: (10) /api/v1/namespaces/proxy-6304/services/https:proxy-service-wl756:tlsportname1/proxy/: tls baz (200; 7.062232ms) May 11 14:52:33.860: INFO: (10) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname1/proxy/: foo (200; 7.149095ms) May 11 14:52:33.860: INFO: (10) /api/v1/namespaces/proxy-6304/services/https:proxy-service-wl756:tlsportname2/proxy/: tls qux (200; 7.136763ms) May 11 14:52:33.863: INFO: (11) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:462/proxy/: tls qux (200; 2.897425ms) May 11 14:52:33.867: INFO: (11) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:1080/proxy/: ... (200; 6.106839ms) May 11 14:52:33.867: INFO: (11) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname2/proxy/: bar (200; 6.261109ms) May 11 14:52:33.867: INFO: (11) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:443/proxy/: test (200; 6.417108ms) May 11 14:52:33.867: INFO: (11) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname1/proxy/: foo (200; 6.551646ms) May 11 14:52:33.867: INFO: (11) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:162/proxy/: bar (200; 6.629049ms) May 11 14:52:33.867: INFO: (11) /api/v1/namespaces/proxy-6304/services/https:proxy-service-wl756:tlsportname1/proxy/: tls baz (200; 7.116325ms) May 11 14:52:33.867: INFO: (11) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:1080/proxy/: test<... 
(200; 6.90511ms) May 11 14:52:33.867: INFO: (11) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:162/proxy/: bar (200; 6.878096ms) May 11 14:52:33.867: INFO: (11) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:160/proxy/: foo (200; 6.771941ms) May 11 14:52:33.867: INFO: (11) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:460/proxy/: tls baz (200; 7.05741ms) May 11 14:52:33.867: INFO: (11) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname1/proxy/: foo (200; 6.80513ms) May 11 14:52:33.868: INFO: (11) /api/v1/namespaces/proxy-6304/services/https:proxy-service-wl756:tlsportname2/proxy/: tls qux (200; 7.104556ms) May 11 14:52:33.871: INFO: (12) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:162/proxy/: bar (200; 2.925015ms) May 11 14:52:33.871: INFO: (12) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:462/proxy/: tls qux (200; 3.20421ms) May 11 14:52:33.871: INFO: (12) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:162/proxy/: bar (200; 3.25676ms) May 11 14:52:33.871: INFO: (12) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7/proxy/: test (200; 3.505328ms) May 11 14:52:33.873: INFO: (12) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:1080/proxy/: ... (200; 5.101052ms) May 11 14:52:33.873: INFO: (12) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:1080/proxy/: test<... (200; 5.699165ms) May 11 14:52:33.874: INFO: (12) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:443/proxy/: test (200; 3.44289ms) May 11 14:52:33.881: INFO: (13) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:460/proxy/: tls baz (200; 3.880253ms) May 11 14:52:33.882: INFO: (13) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:162/proxy/: bar (200; 4.236949ms) May 11 14:52:33.882: INFO: (13) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:160/proxy/: foo (200; 4.319502ms) May 11 14:52:33.882: INFO: (13) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:160/proxy/: foo (200; 4.530061ms) May 11 14:52:33.882: INFO: (13) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:1080/proxy/: ... (200; 4.811578ms) May 11 14:52:33.882: INFO: (13) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:162/proxy/: bar (200; 4.862463ms) May 11 14:52:33.882: INFO: (13) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname1/proxy/: foo (200; 5.145353ms) May 11 14:52:33.882: INFO: (13) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:443/proxy/: test<... 
(200; 8.885497ms) May 11 14:52:33.886: INFO: (13) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:462/proxy/: tls qux (200; 8.884844ms) May 11 14:52:33.887: INFO: (13) /api/v1/namespaces/proxy-6304/services/https:proxy-service-wl756:tlsportname2/proxy/: tls qux (200; 9.333305ms) May 11 14:52:33.887: INFO: (13) /api/v1/namespaces/proxy-6304/services/https:proxy-service-wl756:tlsportname1/proxy/: tls baz (200; 9.486956ms) May 11 14:52:33.887: INFO: (13) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname2/proxy/: bar (200; 9.412628ms) May 11 14:52:33.887: INFO: (13) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname2/proxy/: bar (200; 9.545328ms) May 11 14:52:33.887: INFO: (13) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname1/proxy/: foo (200; 9.649253ms) May 11 14:52:33.897: INFO: (14) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:443/proxy/: test<... (200; 9.703011ms) May 11 14:52:33.897: INFO: (14) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7/proxy/: test (200; 9.804138ms) May 11 14:52:33.897: INFO: (14) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:1080/proxy/: ... (200; 9.681432ms) May 11 14:52:33.897: INFO: (14) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:162/proxy/: bar (200; 9.916609ms) May 11 14:52:33.897: INFO: (14) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:160/proxy/: foo (200; 9.797171ms) May 11 14:52:33.899: INFO: (14) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname1/proxy/: foo (200; 12.363622ms) May 11 14:52:33.900: INFO: (14) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:162/proxy/: bar (200; 12.92403ms) May 11 14:52:33.900: INFO: (14) /api/v1/namespaces/proxy-6304/services/https:proxy-service-wl756:tlsportname1/proxy/: tls baz (200; 12.848028ms) May 11 14:52:33.900: INFO: (14) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:460/proxy/: tls baz (200; 12.982157ms) May 11 14:52:33.900: INFO: (14) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname2/proxy/: bar (200; 12.873122ms) May 11 14:52:33.900: INFO: (14) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname1/proxy/: foo (200; 12.975868ms) May 11 14:52:33.900: INFO: (14) /api/v1/namespaces/proxy-6304/services/https:proxy-service-wl756:tlsportname2/proxy/: tls qux (200; 12.980073ms) May 11 14:52:33.900: INFO: (14) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname2/proxy/: bar (200; 12.982085ms) May 11 14:52:33.909: INFO: (15) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:160/proxy/: foo (200; 8.503386ms) May 11 14:52:33.909: INFO: (15) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:443/proxy/: ... 
(200; 8.701195ms) May 11 14:52:33.917: INFO: (15) /api/v1/namespaces/proxy-6304/services/https:proxy-service-wl756:tlsportname1/proxy/: tls baz (200; 16.586851ms) May 11 14:52:33.917: INFO: (15) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7/proxy/: test (200; 16.57ms) May 11 14:52:33.917: INFO: (15) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:460/proxy/: tls baz (200; 16.577083ms) May 11 14:52:33.917: INFO: (15) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:162/proxy/: bar (200; 16.740382ms) May 11 14:52:33.917: INFO: (15) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname2/proxy/: bar (200; 16.769215ms) May 11 14:52:33.917: INFO: (15) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:462/proxy/: tls qux (200; 16.691744ms) May 11 14:52:33.917: INFO: (15) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:162/proxy/: bar (200; 16.708371ms) May 11 14:52:33.917: INFO: (15) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname1/proxy/: foo (200; 16.827781ms) May 11 14:52:33.917: INFO: (15) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:1080/proxy/: test<... (200; 16.724835ms) May 11 14:52:33.917: INFO: (15) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname2/proxy/: bar (200; 16.801288ms) May 11 14:52:33.917: INFO: (15) /api/v1/namespaces/proxy-6304/services/https:proxy-service-wl756:tlsportname2/proxy/: tls qux (200; 16.787913ms) May 11 14:52:33.917: INFO: (15) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname1/proxy/: foo (200; 16.84534ms) May 11 14:52:33.917: INFO: (15) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:160/proxy/: foo (200; 16.856567ms) May 11 14:52:33.920: INFO: (16) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:462/proxy/: tls qux (200; 2.575373ms) May 11 14:52:33.921: INFO: (16) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:1080/proxy/: test<... (200; 4.066396ms) May 11 14:52:33.922: INFO: (16) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:443/proxy/: ... (200; 5.586232ms) May 11 14:52:33.923: INFO: (16) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname2/proxy/: bar (200; 5.700807ms) May 11 14:52:33.923: INFO: (16) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:162/proxy/: bar (200; 5.544519ms) May 11 14:52:33.923: INFO: (16) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7/proxy/: test (200; 5.704222ms) May 11 14:52:33.931: INFO: (17) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7/proxy/: test (200; 7.527369ms) May 11 14:52:33.932: INFO: (17) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:460/proxy/: tls baz (200; 7.923186ms) May 11 14:52:33.932: INFO: (17) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:160/proxy/: foo (200; 7.087004ms) May 11 14:52:33.932: INFO: (17) /api/v1/namespaces/proxy-6304/services/https:proxy-service-wl756:tlsportname2/proxy/: tls qux (200; 7.502724ms) May 11 14:52:33.932: INFO: (17) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:162/proxy/: bar (200; 8.243775ms) May 11 14:52:33.932: INFO: (17) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:443/proxy/: ... 
(200; 7.446573ms) May 11 14:52:33.932: INFO: (17) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:160/proxy/: foo (200; 6.910515ms) May 11 14:52:33.932: INFO: (17) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:1080/proxy/: test<... (200; 8.476647ms) May 11 14:52:33.933: INFO: (17) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:462/proxy/: tls qux (200; 9.621568ms) May 11 14:52:33.933: INFO: (17) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname1/proxy/: foo (200; 10.192638ms) May 11 14:52:33.933: INFO: (17) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:162/proxy/: bar (200; 10.119017ms) May 11 14:52:33.934: INFO: (17) /api/v1/namespaces/proxy-6304/services/https:proxy-service-wl756:tlsportname1/proxy/: tls baz (200; 10.159716ms) May 11 14:52:33.934: INFO: (17) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname2/proxy/: bar (200; 10.310677ms) May 11 14:52:33.935: INFO: (17) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname2/proxy/: bar (200; 11.307995ms) May 11 14:52:33.940: INFO: (18) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:160/proxy/: foo (200; 4.259402ms) May 11 14:52:33.940: INFO: (18) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:443/proxy/: test<... (200; 4.626761ms) May 11 14:52:33.940: INFO: (18) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:462/proxy/: tls qux (200; 4.505002ms) May 11 14:52:33.942: INFO: (18) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:1080/proxy/: ... (200; 5.343414ms) May 11 14:52:33.942: INFO: (18) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:160/proxy/: foo (200; 5.230707ms) May 11 14:52:33.942: INFO: (18) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7/proxy/: test (200; 6.173682ms) May 11 14:52:33.942: INFO: (18) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:162/proxy/: bar (200; 6.145037ms) May 11 14:52:33.942: INFO: (18) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:460/proxy/: tls baz (200; 6.248869ms) May 11 14:52:33.943: INFO: (18) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname2/proxy/: bar (200; 6.440496ms) May 11 14:52:33.943: INFO: (18) /api/v1/namespaces/proxy-6304/services/https:proxy-service-wl756:tlsportname1/proxy/: tls baz (200; 6.560641ms) May 11 14:52:33.943: INFO: (18) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname2/proxy/: bar (200; 7.267255ms) May 11 14:52:33.943: INFO: (18) /api/v1/namespaces/proxy-6304/services/https:proxy-service-wl756:tlsportname2/proxy/: tls qux (200; 7.077305ms) May 11 14:52:33.943: INFO: (18) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname1/proxy/: foo (200; 8.022706ms) May 11 14:52:33.943: INFO: (18) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname1/proxy/: foo (200; 7.446228ms) May 11 14:52:33.946: INFO: (19) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:160/proxy/: foo (200; 2.755194ms) May 11 14:52:33.946: INFO: (19) /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:160/proxy/: foo (200; 2.720016ms) May 11 14:52:33.948: INFO: (19) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:162/proxy/: bar (200; 4.485416ms) May 11 14:52:33.948: INFO: (19) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7:1080/proxy/: test<... 
(200; 4.647994ms) May 11 14:52:33.948: INFO: (19) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:462/proxy/: tls qux (200; 4.95102ms) May 11 14:52:33.948: INFO: (19) /api/v1/namespaces/proxy-6304/pods/proxy-service-wl756-z79h7/proxy/: test (200; 4.981152ms) May 11 14:52:33.949: INFO: (19) /api/v1/namespaces/proxy-6304/pods/https:proxy-service-wl756-z79h7:443/proxy/: ... (200; 4.952093ms) May 11 14:52:33.950: INFO: (19) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname1/proxy/: foo (200; 6.883348ms) May 11 14:52:33.950: INFO: (19) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname2/proxy/: bar (200; 6.798421ms) May 11 14:52:33.950: INFO: (19) /api/v1/namespaces/proxy-6304/services/proxy-service-wl756:portname2/proxy/: bar (200; 6.850549ms) May 11 14:52:33.951: INFO: (19) /api/v1/namespaces/proxy-6304/services/https:proxy-service-wl756:tlsportname2/proxy/: tls qux (200; 7.055141ms) May 11 14:52:33.951: INFO: (19) /api/v1/namespaces/proxy-6304/services/https:proxy-service-wl756:tlsportname1/proxy/: tls baz (200; 7.098165ms) May 11 14:52:33.951: INFO: (19) /api/v1/namespaces/proxy-6304/services/http:proxy-service-wl756:portname1/proxy/: foo (200; 7.149976ms) STEP: deleting ReplicationController proxy-service-wl756 in namespace proxy-6304, will wait for the garbage collector to delete the pods May 11 14:52:34.008: INFO: Deleting ReplicationController proxy-service-wl756 took: 5.842031ms May 11 14:52:34.309: INFO: Terminating ReplicationController proxy-service-wl756 pods took: 300.171516ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:52:36.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6304" for this suite. 
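
[Editorial note: every one of the 320 attempts above is an HTTP GET against the apiserver's proxy subresource. The path segment after pods/ or services/ encodes scheme, target, and port (or port name) as <scheme>:<name>:<port>, and everything after /proxy/ is forwarded to the backend. Below is a minimal sketch of issuing one such request outside the e2e framework; the namespace and pod name are taken from this run's log, and the client-go calls assume a v1.15-era client, in which Request.DoRaw takes no context argument.]

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the e2e run uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// GET /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:160/proxy/
	// The "http:" prefix selects the scheme and ":160" the target port,
	// exactly as in the attempts logged above (port 160 answers "foo").
	data, err := clientset.CoreV1().RESTClient().Get().
		Namespace("proxy-6304").
		Resource("pods").
		Name("http:proxy-service-wl756-z79h7:160").
		SubResource("proxy").
		DoRaw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", data)
}

[The equivalent one-liner with kubectl is: kubectl get --raw /api/v1/namespaces/proxy-6304/pods/http:proxy-service-wl756-z79h7:160/proxy/ ]
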
May 11 14:52:43.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:52:43.586: INFO: namespace proxy-6304 deletion completed in 6.762790417s • [SLOW TEST:16.882 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:52:43.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 11 14:53:01.964: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8665 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 14:53:01.964: INFO: >>> kubeConfig: /root/.kube/config I0511 14:53:01.992556 7 log.go:172] (0xc000e978c0) (0xc002acf2c0) Create stream I0511 14:53:01.992590 7 log.go:172] (0xc000e978c0) (0xc002acf2c0) Stream added, broadcasting: 1 I0511 14:53:01.994580 7 log.go:172] (0xc000e978c0) Reply frame received for 1 I0511 14:53:01.994615 7 log.go:172] (0xc000e978c0) (0xc0000ff2c0) Create stream I0511 14:53:01.994630 7 log.go:172] (0xc000e978c0) (0xc0000ff2c0) Stream added, broadcasting: 3 I0511 14:53:01.995309 7 log.go:172] (0xc000e978c0) Reply frame received for 3 I0511 14:53:01.995360 7 log.go:172] (0xc000e978c0) (0xc000673360) Create stream I0511 14:53:01.995376 7 log.go:172] (0xc000e978c0) (0xc000673360) Stream added, broadcasting: 5 I0511 14:53:01.996139 7 log.go:172] (0xc000e978c0) Reply frame received for 5 I0511 14:53:02.048488 7 log.go:172] (0xc000e978c0) Data frame received for 3 I0511 14:53:02.048509 7 log.go:172] (0xc0000ff2c0) (3) Data frame handling I0511 14:53:02.048516 7 log.go:172] (0xc0000ff2c0) (3) Data frame sent I0511 14:53:02.048520 7 log.go:172] (0xc000e978c0) Data frame received for 3 I0511 14:53:02.048526 7 log.go:172] (0xc0000ff2c0) (3) Data frame handling I0511 14:53:02.048551 7 log.go:172] (0xc000e978c0) Data frame received for 5 I0511 14:53:02.048567 7 log.go:172] (0xc000673360) (5) Data frame handling I0511 14:53:02.049861 7 log.go:172] (0xc000e978c0) Data frame received for 1 I0511 14:53:02.049898 7 log.go:172] (0xc002acf2c0) (1) Data frame handling I0511 14:53:02.049931 7 log.go:172] (0xc002acf2c0) (1) Data frame sent I0511 14:53:02.049959 7 log.go:172] (0xc000e978c0) (0xc002acf2c0) Stream removed, 
May 11 14:53:02.050: INFO: Exec stderr: ""
May 11 14:53:02.050: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8665 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 14:53:02.050: INFO: >>> kubeConfig: /root/.kube/config
May 11 14:53:02.204: INFO: Exec stderr: ""
May 11 14:53:02.204: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8665 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 14:53:02.204: INFO: >>> kubeConfig: /root/.kube/config
May 11 14:53:02.302: INFO: Exec stderr: ""
May 11 14:53:02.302: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8665 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 14:53:02.302: INFO: >>> kubeConfig: /root/.kube/config
May 11 14:53:02.407: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
May 11 14:53:02.407: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8665 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 14:53:02.407: INFO: >>> kubeConfig: /root/.kube/config
May 11 14:53:02.509: INFO: Exec stderr: ""
May 11 14:53:02.509: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8665 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 14:53:02.509: INFO: >>> kubeConfig: /root/.kube/config
May 11 14:53:02.610: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
May 11 14:53:02.610: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8665 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 14:53:02.610: INFO: >>> kubeConfig: /root/.kube/config
May 11 14:53:02.692: INFO: Exec stderr: ""
May 11 14:53:02.692: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8665 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 14:53:02.692: INFO: >>> kubeConfig: /root/.kube/config
May 11 14:53:02.850: INFO: Exec stderr: ""
May 11 14:53:02.850: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8665 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 14:53:02.850: INFO: >>> kubeConfig: /root/.kube/config
May 11 14:53:02.937: INFO: Exec stderr: ""
May 11 14:53:02.937: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8665 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 14:53:02.937: INFO: >>> kubeConfig: /root/.kube/config
May 11 14:53:03.035: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 14:53:03.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-8665" for this suite.
May 11 14:54:11.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 14:54:11.551: INFO: namespace e2e-kubelet-etc-hosts-8665 deletion completed in 1m8.512414593s
• [SLOW TEST:87.965 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
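Behind the exec traffic, the spec boils down to six cat commands: a regular pod gets a kubelet-managed /etc/hosts (the test image keeps the image's copy at /etc/hosts-original), a container that mounts its own /etc/hosts is left untouched, and a hostNetwork=true pod sees the node's file. A rough manual equivalent while the pods are still up (pod and namespace names are the ones this run created):

kubectl -n e2e-kubelet-etc-hosts-8665 exec test-pod -c busybox-1 -- cat /etc/hosts
kubectl -n e2e-kubelet-etc-hosts-8665 exec test-pod -c busybox-1 -- cat /etc/hosts-original

# the host-network pod keeps whatever the node's /etc/hosts contains
kubectl -n e2e-kubelet-etc-hosts-8665 exec test-host-network-pod -c busybox-1 -- cat /etc/hosts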
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 14:54:11.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
May 11 14:54:11.994: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix008336651/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 14:54:12.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7927" for this suite.
May 11 14:54:18.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 14:54:18.234: INFO: namespace kubectl-7927 deletion completed in 6.144366305s
• [SLOW TEST:6.682 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
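kubectl proxy normally listens on a local TCP port; --unix-socket binds the same proxy to a filesystem socket instead, which is all this spec checks by fetching /api/ through it. A quick manual sketch (the socket path is arbitrary):

kubectl proxy --unix-socket=/tmp/kubectl.sock &
curl --unix-socket /tmp/kubectl.sock http://localhost/api/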
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 14:54:18.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 11 14:54:18.430: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 11 14:54:18.477: INFO: Number of nodes with available pods: 0
May 11 14:54:18.477: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:54:20.030: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 11 14:54:20.227: INFO: Number of nodes with available pods: 0
May 11 14:54:20.227: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:54:20.483: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 11 14:54:20.488: INFO: Number of nodes with available pods: 0
May 11 14:54:20.488: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:54:21.485: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 11 14:54:21.550: INFO: Number of nodes with available pods: 0
May 11 14:54:21.550: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:54:22.482: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 11 14:54:22.486: INFO: Number of nodes with available pods: 0
May 11 14:54:22.486: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:54:23.509: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 11 14:54:23.524: INFO: Number of nodes with available pods: 0
May 11 14:54:23.524: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:54:24.857: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 11 14:54:25.161: INFO: Number of nodes with available pods: 2
May 11 14:54:25.161: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
May 11 14:54:25.490: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 11 14:54:25.493: INFO: Number of nodes with available pods: 1
May 11 14:54:25.493: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:54:26.616: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 11 14:54:26.621: INFO: Number of nodes with available pods: 1
May 11 14:54:26.621: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:54:27.567: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 11 14:54:27.747: INFO: Number of nodes with available pods: 1
May 11 14:54:27.747: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:54:28.498: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 11 14:54:28.502: INFO: Number of nodes with available pods: 1
May 11 14:54:28.502: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:54:29.497: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 11 14:54:29.499: INFO: Number of nodes with available pods: 1
May 11 14:54:29.499: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:54:30.498: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 11 14:54:30.502: INFO: Number of nodes with available pods: 1
May 11 14:54:30.502: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:54:31.628: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 11 14:54:31.631: INFO: Number of nodes with available pods: 1
May 11 14:54:31.631: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:54:32.922: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 11 14:54:32.969: INFO: Number of nodes with available pods: 1
May 11 14:54:32.969: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:54:33.498: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 11 14:54:33.500: INFO: Number of nodes with available pods: 1
May 11 14:54:33.500: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:54:34.516: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 11 14:54:34.519: INFO: Number of nodes with available pods: 1
May 11 14:54:34.519: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:54:35.497: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 11 14:54:35.499: INFO: Number of nodes with available pods: 1
May 11 14:54:35.499: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:54:36.498: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 11 14:54:36.501: INFO: Number of nodes with available pods: 1
May 11 14:54:36.501: INFO: Node iruya-worker is running more than one daemon pod
May 11 14:54:37.499: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
May 11 14:54:37.501: INFO: Number of nodes with available pods: 2
May 11 14:54:37.501: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6910, will wait for the garbage collector to delete the pods
May 11 14:54:37.561: INFO: Deleting DaemonSet.extensions daemon-set took: 5.800542ms
May 11 14:54:37.861: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.20747ms
May 11 14:54:53.662: INFO: Number of nodes with available pods: 0
May 11 14:54:53.662: INFO: Number of running nodes: 0, number of available pods: 0
May 11 14:54:53.664: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6910/daemonsets","resourceVersion":"10268754"},"items":null}
May 11 14:54:53.667: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6910/pods","resourceVersion":"10268754"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 14:54:53.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6910" for this suite.
May 11 14:55:01.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 14:55:02.072: INFO: namespace daemonsets-6910 deletion completed in 8.392385978s
• [SLOW TEST:43.838 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
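The taint lines repeat because the DaemonSet's pods carry no toleration for the control plane's node-role.kubernetes.io/master:NoSchedule taint, so only the two workers count toward availability. A minimal DaemonSet in the same spirit, plus the stop-and-revive check, might look like this (the image and labels are illustrative, not the test's own):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: nginx:1.14-alpine
EOF

# delete one daemon pod; the controller recreates it on the same node
kubectl delete pod -l app=daemon-set --wait=false
kubectl get pods -l app=daemon-set -w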
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 14:55:02.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 11 14:55:02.240: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87675477-2b6c-4b2c-9213-f7254ef75bba" in namespace "projected-4863" to be "success or failure"
May 11 14:55:02.364: INFO: Pod "downwardapi-volume-87675477-2b6c-4b2c-9213-f7254ef75bba": Phase="Pending", Reason="", readiness=false. Elapsed: 123.493077ms
May 11 14:55:04.368: INFO: Pod "downwardapi-volume-87675477-2b6c-4b2c-9213-f7254ef75bba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127987834s
May 11 14:55:06.377: INFO: Pod "downwardapi-volume-87675477-2b6c-4b2c-9213-f7254ef75bba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136922092s
May 11 14:55:08.497: INFO: Pod "downwardapi-volume-87675477-2b6c-4b2c-9213-f7254ef75bba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.256880708s
STEP: Saw pod success
May 11 14:55:08.497: INFO: Pod "downwardapi-volume-87675477-2b6c-4b2c-9213-f7254ef75bba" satisfied condition "success or failure"
May 11 14:55:08.500: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-87675477-2b6c-4b2c-9213-f7254ef75bba container client-container: 
STEP: delete the pod
May 11 14:55:08.623: INFO: Waiting for pod downwardapi-volume-87675477-2b6c-4b2c-9213-f7254ef75bba to disappear
May 11 14:55:08.687: INFO: Pod downwardapi-volume-87675477-2b6c-4b2c-9213-f7254ef75bba no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 14:55:08.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4863" for this suite.
May 11 14:55:14.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 14:55:14.930: INFO: namespace projected-4863 deletion completed in 6.238682087s
• [SLOW TEST:12.857 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
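The pod under test mounts a projected downwardAPI volume that renders the container's CPU limit into a file and simply cats it. A minimal sketch of such a pod (names and the 500m limit are illustrative; with the default divisor of 1 the file reads the limit rounded up to whole cores):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF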
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 14:55:14.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0511 14:55:28.559615       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 11 14:55:28.559: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 14:55:28.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6983" for this suite.
May 11 14:55:41.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 14:55:41.214: INFO: namespace gc-6983 deletion completed in 12.619710823s
• [SLOW TEST:26.284 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
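The setup gives half of rc1's pods a second ownerReference pointing at rc2, so when rc1 is deleted with foreground propagation the garbage collector must hold rc1 until its dependents are handled, yet leave the doubly-owned pods alive because a valid owner remains. The key step, sketched with kubectl (the --cascade=foreground spelling needs a reasonably current kubectl; the rc name is from this run):

# list each pod with its owners; doubly-owned pods show both rcs
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.metadata.ownerReferences[*].name}{"\n"}{end}'

# foreground-delete one owner; pods that still have a live owner survive
kubectl delete rc simpletest-rc-to-be-deleted --cascade=foreground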
[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 14:55:41.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-f647c0a3-6efc-43bb-ade6-ab52a1f63f58
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 14:55:41.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2887" for this suite.
May 11 14:55:47.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 14:55:47.891: INFO: namespace configmap-2887 deletion completed in 6.177788998s
• [SLOW TEST:6.677 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
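No object is ever stored here: a ConfigMap data key must be a valid config key, so the empty key is rejected by API validation and the spec only asserts on the error. Reproducible in one shot (the name is arbitrary):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-emptykey
data:
  "": "not allowed"
EOF
# expected to fail validation with a message along the lines of:
#   a valid config key must consist of alphanumeric characters, '-', '_' or '.'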
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 14:55:47.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-9228
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 11 14:55:49.151: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 11 14:56:22.844: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.45:8080/dial?request=hostName&protocol=udp&host=10.244.2.44&port=8081&tries=1'] Namespace:pod-network-test-9228 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 14:56:22.844: INFO: >>> kubeConfig: /root/.kube/config
May 11 14:56:22.960: INFO: Waiting for endpoints: map[]
May 11 14:56:22.964: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.45:8080/dial?request=hostName&protocol=udp&host=10.244.1.112&port=8081&tries=1'] Namespace:pod-network-test-9228 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 14:56:22.964: INFO: >>> kubeConfig: /root/.kube/config
May 11 14:56:23.092: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 14:56:23.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9228" for this suite.
May 11 14:56:49.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 14:56:49.240: INFO: namespace pod-network-test-9228 deletion completed in 26.143200078s
• [SLOW TEST:61.348 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
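The two execs drive the standard netexec "dial" pattern: from a host-network helper pod, curl one test pod's HTTP port and ask it to relay a UDP probe to the other pod, then compare the hostnames that answer ("Waiting for endpoints: map[]" means no expected responder is still missing). By hand it would be (the pod IPs are specific to this run):

kubectl -n pod-network-test-9228 exec host-test-container-pod -c hostexec -- \
  curl -g -q -s 'http://10.244.2.45:8080/dial?request=hostName&protocol=udp&host=10.244.2.44&port=8081&tries=1'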
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 14:56:49.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 11 14:56:54.072: INFO: Successfully updated pod "pod-update-activedeadlineseconds-2db26c5b-57b5-45e9-bb86-14c23f0c9d95"
May 11 14:56:54.072: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-2db26c5b-57b5-45e9-bb86-14c23f0c9d95" in namespace "pods-1376" to be "terminated due to deadline exceeded"
May 11 14:56:54.128: INFO: Pod "pod-update-activedeadlineseconds-2db26c5b-57b5-45e9-bb86-14c23f0c9d95": Phase="Running", Reason="", readiness=true. Elapsed: 55.938528ms
May 11 14:56:56.132: INFO: Pod "pod-update-activedeadlineseconds-2db26c5b-57b5-45e9-bb86-14c23f0c9d95": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.059694735s
May 11 14:56:56.132: INFO: Pod "pod-update-activedeadlineseconds-2db26c5b-57b5-45e9-bb86-14c23f0c9d95" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 11 14:56:56.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1376" for this suite.
May 11 14:57:02.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 11 14:57:02.299: INFO: namespace pods-1376 deletion completed in 6.162981174s
• [SLOW TEST:13.059 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
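spec.activeDeadlineSeconds is one of the few pod fields that may be mutated after creation (it can only be set or shortened), and once the deadline passes the kubelet fails the pod with reason DeadlineExceeded, exactly as the Phase="Failed" line above records. A sketch with a hypothetical pod name:

# give a running pod a short deadline
kubectl patch pod pod-update-activedeadlineseconds-demo --type=merge \
  -p '{"spec":{"activeDeadlineSeconds":5}}'

# a few seconds later the pod reports Failed/DeadlineExceeded
kubectl get pod pod-update-activedeadlineseconds-demo \
  -o jsonpath='{.status.phase}/{.status.reason}'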
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 11 14:57:02.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8704
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 11 14:57:02.363: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 11 14:57:30.549: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.113:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8704 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 14:57:30.549: INFO: >>> kubeConfig: /root/.kube/config
May 11 14:57:30.720: INFO: Found all expected endpoints: [netserver-0]
May 11 14:57:30.724: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.47:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8704 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 14:57:30.724: INFO: >>> kubeConfig: /root/.kube/config
Stream removed, broadcasting: 3 I0511 14:57:30.853398 7 log.go:172] (0xc00001f550) (0xc0009cce60) Stream removed, broadcasting: 5 May 11 14:57:30.853: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:57:30.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8704" for this suite. May 11 14:57:56.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:57:56.928: INFO: namespace pod-network-test-8704 deletion completed in 26.070686165s • [SLOW TEST:54.629 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:57:56.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 14:58:05.127: INFO: Waiting up to 5m0s for pod "client-envvars-3aecf4ea-c512-4f1a-a3ef-b2d04d106600" in namespace "pods-2702" to be "success or failure" May 11 14:58:05.140: INFO: Pod "client-envvars-3aecf4ea-c512-4f1a-a3ef-b2d04d106600": Phase="Pending", Reason="", readiness=false. Elapsed: 13.846336ms May 11 14:58:07.145: INFO: Pod "client-envvars-3aecf4ea-c512-4f1a-a3ef-b2d04d106600": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018417403s May 11 14:58:09.358: INFO: Pod "client-envvars-3aecf4ea-c512-4f1a-a3ef-b2d04d106600": Phase="Pending", Reason="", readiness=false. Elapsed: 4.23151944s May 11 14:58:11.362: INFO: Pod "client-envvars-3aecf4ea-c512-4f1a-a3ef-b2d04d106600": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.23511819s STEP: Saw pod success May 11 14:58:11.362: INFO: Pod "client-envvars-3aecf4ea-c512-4f1a-a3ef-b2d04d106600" satisfied condition "success or failure" May 11 14:58:11.364: INFO: Trying to get logs from node iruya-worker pod client-envvars-3aecf4ea-c512-4f1a-a3ef-b2d04d106600 container env3cont: STEP: delete the pod May 11 14:58:11.488: INFO: Waiting for pod client-envvars-3aecf4ea-c512-4f1a-a3ef-b2d04d106600 to disappear May 11 14:58:11.520: INFO: Pod client-envvars-3aecf4ea-c512-4f1a-a3ef-b2d04d106600 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:58:11.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2702" for this suite. May 11 14:58:53.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:58:53.635: INFO: namespace pods-2702 deletion completed in 42.111480437s • [SLOW TEST:56.707 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:58:53.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs May 11 14:58:53.858: INFO: Waiting up to 5m0s for pod "pod-d72f39d8-c758-4e61-8f31-d2100612132f" in namespace "emptydir-7458" to be "success or failure" May 11 14:58:53.895: INFO: Pod "pod-d72f39d8-c758-4e61-8f31-d2100612132f": Phase="Pending", Reason="", readiness=false. Elapsed: 36.519782ms May 11 14:58:55.955: INFO: Pod "pod-d72f39d8-c758-4e61-8f31-d2100612132f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096613392s May 11 14:58:57.959: INFO: Pod "pod-d72f39d8-c758-4e61-8f31-d2100612132f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100679181s May 11 14:58:59.962: INFO: Pod "pod-d72f39d8-c758-4e61-8f31-d2100612132f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.104350347s STEP: Saw pod success May 11 14:58:59.962: INFO: Pod "pod-d72f39d8-c758-4e61-8f31-d2100612132f" satisfied condition "success or failure" May 11 14:58:59.965: INFO: Trying to get logs from node iruya-worker pod pod-d72f39d8-c758-4e61-8f31-d2100612132f container test-container: STEP: delete the pod May 11 14:59:00.351: INFO: Waiting for pod pod-d72f39d8-c758-4e61-8f31-d2100612132f to disappear May 11 14:59:00.418: INFO: Pod pod-d72f39d8-c758-4e61-8f31-d2100612132f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:59:00.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7458" for this suite. May 11 14:59:06.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:59:06.750: INFO: namespace emptydir-7458 deletion completed in 6.328928722s • [SLOW TEST:13.114 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:59:06.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:59:13.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-768" for this suite. 
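Note: the adoption flow exercised by this spec can be reproduced by hand with kubectl, the same way the suite drives it. A minimal sketch, assuming a reachable cluster; the names and image below are illustrative, not the manifests the suite actually submitted:

# Create a bare (orphan) pod carrying the label the controller will select on.
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/nginx:1.14-alpine
EOF

# Create a ReplicationController whose selector matches that label. Since
# replicas: 1 is already satisfied by the existing pod, the controller
# adopts it instead of creating a new one.
kubectl create -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx:1.14-alpine
EOF

# Adoption is visible as an ownerReference stamped onto the pod.
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'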
May 11 14:59:36.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 14:59:36.273: INFO: namespace replication-controller-768 deletion completed in 22.348987536s • [SLOW TEST:29.522 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 14:59:36.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 14:59:36.340: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 11 14:59:36.358: INFO: Pod name sample-pod: Found 0 pods out of 1 May 11 14:59:41.384: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 11 14:59:41.385: INFO: Creating deployment "test-rolling-update-deployment" May 11 14:59:41.389: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 11 14:59:41.410: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 11 14:59:43.562: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 11 14:59:43.611: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805981, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805981, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805981, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805981, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 14:59:45.615: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805981, 
loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805981, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805981, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805981, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 14:59:47.615: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805981, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805981, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805981, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805981, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 14:59:49.851: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805981, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805981, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805988, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724805981, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 14:59:51.614: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 11 14:59:51.623: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-6229,SelfLink:/apis/apps/v1/namespaces/deployment-6229/deployments/test-rolling-update-deployment,UID:eea8dcbc-4b5c-4a78-8a3f-7087d2c18f96,ResourceVersion:10269852,Generation:1,CreationTimestamp:2020-05-11 14:59:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-11 14:59:41 +0000 UTC 2020-05-11 14:59:41 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-11 14:59:49 +0000 UTC 2020-05-11 14:59:41 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 11 14:59:51.627: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-6229,SelfLink:/apis/apps/v1/namespaces/deployment-6229/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:4a60e50c-e920-4db6-8302-50699c1e9f2b,ResourceVersion:10269838,Generation:1,CreationTimestamp:2020-05-11 14:59:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment eea8dcbc-4b5c-4a78-8a3f-7087d2c18f96 0xc002d480a7
0xc002d480a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 11 14:59:51.627: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 11 14:59:51.628: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-6229,SelfLink:/apis/apps/v1/namespaces/deployment-6229/replicasets/test-rolling-update-controller,UID:e0c84090-5451-4c3b-a41e-7d2daf6b299b,ResourceVersion:10269850,Generation:2,CreationTimestamp:2020-05-11 14:59:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment eea8dcbc-4b5c-4a78-8a3f-7087d2c18f96 0xc0025e5fd7 0xc0025e5fd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: 
nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 11 14:59:51.631: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-4zkdf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-4zkdf,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-6229,SelfLink:/api/v1/namespaces/deployment-6229/pods/test-rolling-update-deployment-79f6b9d75c-4zkdf,UID:a4ecc503-23c5-4dd6-8f51-6588f2148953,ResourceVersion:10269837,Generation:0,CreationTimestamp:2020-05-11 14:59:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 4a60e50c-e920-4db6-8302-50699c1e9f2b 0xc002c18017 0xc002c18018}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4jv5c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4jv5c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-4jv5c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c18090} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002c180b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:59:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:59:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:59:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 14:59:41 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.52,StartTime:2020-05-11 14:59:41 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-11 14:59:48 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://d5b3bcc8c352329d5908b8f52efc1fbf56617a41a77350a7efe06fecf6c690a5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 14:59:51.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6229" for this suite. May 11 14:59:59.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 15:00:00.013: INFO: namespace deployment-6229 deletion completed in 8.377784144s • [SLOW TEST:23.740 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 15:00:00.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 11 15:00:00.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-4756' May 11 15:00:00.228: INFO: stderr: "" May 11 15:00:00.228: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 11 15:00:05.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod 
e2e-test-nginx-pod --namespace=kubectl-4756 -o json' May 11 15:00:05.370: INFO: stderr: "" May 11 15:00:05.370: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-11T15:00:00Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-4756\",\n \"resourceVersion\": \"10269914\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-4756/pods/e2e-test-nginx-pod\",\n \"uid\": \"97f9681b-2479-4ace-ac56-d1510d4d68f1\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-clqnn\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-clqnn\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-clqnn\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T15:00:00Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T15:00:03Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T15:00:03Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T15:00:00Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://8d761b0e623d512ac5302a195a01bbb624378bd13776a1489024739249d553b5\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-11T15:00:03Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.116\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-11T15:00:00Z\"\n }\n}\n" STEP: replace the image in the pod May 11 15:00:05.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4756' May 11 15:00:05.713: INFO: stderr: "" May 11 15:00:05.713: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 May 11 15:00:05.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4756' May 11 15:00:22.044: INFO: stderr: "" May 11 15:00:22.044: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 15:00:22.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4756" for this suite. May 11 15:00:28.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 15:00:28.760: INFO: namespace kubectl-4756 deletion completed in 6.237780702s • [SLOW TEST:28.747 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 15:00:28.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 11 15:00:29.009: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 11 15:00:34.170: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 11 15:00:34.170: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 11 15:00:34.812: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-4622,SelfLink:/apis/apps/v1/namespaces/deployment-4622/deployments/test-cleanup-deployment,UID:4513c3b3-d123-41c6-bd1d-1a5dc97e1cbe,ResourceVersion:10270014,Generation:1,CreationTimestamp:2020-05-11 15:00:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 11 15:00:35.054: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-4622,SelfLink:/apis/apps/v1/namespaces/deployment-4622/replicasets/test-cleanup-deployment-55bbcbc84c,UID:aede1790-ddb6-4546-9837-d44c32bcf2c0,ResourceVersion:10270017,Generation:1,CreationTimestamp:2020-05-11 15:00:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 4513c3b3-d123-41c6-bd1d-1a5dc97e1cbe 0xc00243e9a7 0xc00243e9a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash:
55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 11 15:00:35.054: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 11 15:00:35.054: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-4622,SelfLink:/apis/apps/v1/namespaces/deployment-4622/replicasets/test-cleanup-controller,UID:3e97b38d-2f67-493a-80d0-95e19b6efd64,ResourceVersion:10270015,Generation:1,CreationTimestamp:2020-05-11 15:00:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 4513c3b3-d123-41c6-bd1d-1a5dc97e1cbe 0xc00243e8d7 0xc00243e8d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 11 15:00:35.603: INFO: Pod "test-cleanup-controller-9bwsn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-9bwsn,GenerateName:test-cleanup-controller-,Namespace:deployment-4622,SelfLink:/api/v1/namespaces/deployment-4622/pods/test-cleanup-controller-9bwsn,UID:ac36710b-7ef6-45e3-b381-856b55b67c14,ResourceVersion:10270012,Generation:0,CreationTimestamp:2020-05-11 15:00:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 3e97b38d-2f67-493a-80d0-95e19b6efd64 0xc00243f297 0xc00243f298}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9c55q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9c55q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9c55q true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00243f310} {node.kubernetes.io/unreachable Exists NoExecute 0xc00243f330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:00:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:00:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:00:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:00:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.53,StartTime:2020-05-11 15:00:29 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-05-11 15:00:32 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://91f91fefeee10a195730bcb6368718b6eb6c8a1748b2826b561d58c5d1f41a41}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 11 15:00:35.603: INFO: Pod "test-cleanup-deployment-55bbcbc84c-mjt54" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-mjt54,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-4622,SelfLink:/api/v1/namespaces/deployment-4622/pods/test-cleanup-deployment-55bbcbc84c-mjt54,UID:4ccfdac4-9517-4c10-a913-8ea39b60892d,ResourceVersion:10270021,Generation:0,CreationTimestamp:2020-05-11 15:00:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c aede1790-ddb6-4546-9837-d44c32bcf2c0 0xc00243f417 0xc00243f418}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9c55q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9c55q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-9c55q true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00243f490} {node.kubernetes.io/unreachable Exists NoExecute 0xc00243f4b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 15:00:35 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 15:00:35.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4622" for this suite. 
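Note: the history cleanup verified by this spec is controlled by the Deployment's .spec.revisionHistoryLimit; the dump above shows RevisionHistoryLimit:*0, so superseded ReplicaSets are deleted outright rather than retained at zero replicas. A minimal sketch of the same behavior, with illustrative names and images:

kubectl create -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo
spec:
  replicas: 1
  revisionHistoryLimit: 0    # keep no old ReplicaSets after a rollout
  selector:
    matchLabels:
      app: cleanup-demo
  template:
    metadata:
      labels:
        app: cleanup-demo
    spec:
      containers:
      - name: web
        image: docker.io/library/nginx:1.14-alpine
EOF

# Trigger a new revision; once it completes, listing ReplicaSets should show
# only the current one, the old one having been garbage-collected.
kubectl set image deployment/cleanup-demo web=docker.io/library/nginx:1.15-alpine
kubectl rollout status deployment/cleanup-demo
kubectl get rs -l app=cleanup-demo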
May 11 15:00:46.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 15:00:46.629: INFO: namespace deployment-4622 deletion completed in 10.869202248s • [SLOW TEST:17.868 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 15:00:46.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-4327 STEP: creating a selector STEP: Creating the service pods in kubernetes May 11 15:00:46.716: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 11 15:01:15.025: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.119:8080/dial?request=hostName&protocol=http&host=10.244.2.54&port=8080&tries=1'] Namespace:pod-network-test-4327 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 15:01:15.025: INFO: >>> kubeConfig: /root/.kube/config I0511 15:01:15.058454 7 log.go:172] (0xc002f209a0) (0xc002083cc0) Create stream I0511 15:01:15.058486 7 log.go:172] (0xc002f209a0) (0xc002083cc0) Stream added, broadcasting: 1 I0511 15:01:15.059812 7 log.go:172] (0xc002f209a0) Reply frame received for 1 I0511 15:01:15.059844 7 log.go:172] (0xc002f209a0) (0xc0011c3900) Create stream I0511 15:01:15.059852 7 log.go:172] (0xc002f209a0) (0xc0011c3900) Stream added, broadcasting: 3 I0511 15:01:15.060459 7 log.go:172] (0xc002f209a0) Reply frame received for 3 I0511 15:01:15.060482 7 log.go:172] (0xc002f209a0) (0xc002083e00) Create stream I0511 15:01:15.060490 7 log.go:172] (0xc002f209a0) (0xc002083e00) Stream added, broadcasting: 5 I0511 15:01:15.061073 7 log.go:172] (0xc002f209a0) Reply frame received for 5 I0511 15:01:15.123902 7 log.go:172] (0xc002f209a0) Data frame received for 3 I0511 15:01:15.123945 7 log.go:172] (0xc0011c3900) (3) Data frame handling I0511 15:01:15.123973 7 log.go:172] (0xc0011c3900) (3) Data frame sent I0511 15:01:15.124259 7 log.go:172] (0xc002f209a0) Data frame received for 5 I0511 15:01:15.124298 7 log.go:172] (0xc002083e00) (5) Data frame handling I0511 15:01:15.124321 7 log.go:172] (0xc002f209a0) Data frame received for 3 I0511 15:01:15.124341 7 log.go:172] (0xc0011c3900) (3) Data frame handling I0511 15:01:15.125665 7 log.go:172] (0xc002f209a0) Data frame received for 1 I0511 15:01:15.125684 7 log.go:172] (0xc002083cc0) (1) Data frame handling I0511 15:01:15.125693 7 log.go:172] (0xc002083cc0) (1) Data frame 
sent I0511 15:01:15.125710 7 log.go:172] (0xc002f209a0) (0xc002083cc0) Stream removed, broadcasting: 1 I0511 15:01:15.125815 7 log.go:172] (0xc002f209a0) (0xc002083cc0) Stream removed, broadcasting: 1 I0511 15:01:15.125835 7 log.go:172] (0xc002f209a0) (0xc0011c3900) Stream removed, broadcasting: 3 I0511 15:01:15.125879 7 log.go:172] (0xc002f209a0) Go away received I0511 15:01:15.125927 7 log.go:172] (0xc002f209a0) (0xc002083e00) Stream removed, broadcasting: 5 May 11 15:01:15.125: INFO: Waiting for endpoints: map[] May 11 15:01:15.128: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.119:8080/dial?request=hostName&protocol=http&host=10.244.1.118&port=8080&tries=1'] Namespace:pod-network-test-4327 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 15:01:15.128: INFO: >>> kubeConfig: /root/.kube/config I0511 15:01:15.156491 7 log.go:172] (0xc0015a88f0) (0xc00157a8c0) Create stream I0511 15:01:15.156533 7 log.go:172] (0xc0015a88f0) (0xc00157a8c0) Stream added, broadcasting: 1 I0511 15:01:15.158414 7 log.go:172] (0xc0015a88f0) Reply frame received for 1 I0511 15:01:15.158459 7 log.go:172] (0xc0015a88f0) (0xc002083f40) Create stream I0511 15:01:15.158475 7 log.go:172] (0xc0015a88f0) (0xc002083f40) Stream added, broadcasting: 3 I0511 15:01:15.159362 7 log.go:172] (0xc0015a88f0) Reply frame received for 3 I0511 15:01:15.159390 7 log.go:172] (0xc0015a88f0) (0xc001e90b40) Create stream I0511 15:01:15.159401 7 log.go:172] (0xc0015a88f0) (0xc001e90b40) Stream added, broadcasting: 5 I0511 15:01:15.160496 7 log.go:172] (0xc0015a88f0) Reply frame received for 5 I0511 15:01:15.221749 7 log.go:172] (0xc0015a88f0) Data frame received for 3 I0511 15:01:15.221780 7 log.go:172] (0xc002083f40) (3) Data frame handling I0511 15:01:15.221803 7 log.go:172] (0xc002083f40) (3) Data frame sent I0511 15:01:15.222281 7 log.go:172] (0xc0015a88f0) Data frame received for 5 I0511 15:01:15.222311 7 log.go:172] (0xc001e90b40) (5) Data frame handling I0511 15:01:15.222637 7 log.go:172] (0xc0015a88f0) Data frame received for 3 I0511 15:01:15.222652 7 log.go:172] (0xc002083f40) (3) Data frame handling I0511 15:01:15.223646 7 log.go:172] (0xc0015a88f0) Data frame received for 1 I0511 15:01:15.223661 7 log.go:172] (0xc00157a8c0) (1) Data frame handling I0511 15:01:15.223675 7 log.go:172] (0xc00157a8c0) (1) Data frame sent I0511 15:01:15.223692 7 log.go:172] (0xc0015a88f0) (0xc00157a8c0) Stream removed, broadcasting: 1 I0511 15:01:15.223742 7 log.go:172] (0xc0015a88f0) Go away received I0511 15:01:15.223783 7 log.go:172] (0xc0015a88f0) (0xc00157a8c0) Stream removed, broadcasting: 1 I0511 15:01:15.223805 7 log.go:172] (0xc0015a88f0) (0xc002083f40) Stream removed, broadcasting: 3 I0511 15:01:15.223817 7 log.go:172] (0xc0015a88f0) (0xc001e90b40) Stream removed, broadcasting: 5 May 11 15:01:15.223: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 15:01:15.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4327" for this suite. 
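Note: the probe driven by ExecWithOptions above is plain HTTP. The suite execs curl inside the hostexec helper container; the /dial endpoint on the test container then dials the target pod over HTTP and reports the hostname it answered with. The same check can be run by hand (the pod IPs here are whatever the cluster assigned for this run):

kubectl exec -n pod-network-test-4327 host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s 'http://10.244.1.119:8080/dial?request=hostName&protocol=http&host=10.244.2.54&port=8080&tries=1'"
# A healthy reply names the target pod, e.g. {"responses":["netserver-0"]}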
May 11 15:01:39.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 15:01:39.309: INFO: namespace pod-network-test-4327 deletion completed in 24.082101317s • [SLOW TEST:52.680 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 15:01:39.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 11 15:01:46.514: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 15:01:47.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-889" for this suite. 
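Note: the release half of this spec is the inverse of adoption. Relabeling a controlled pod so it no longer matches the ReplicaSet's selector strips the pod's ownerReference (it keeps running as an orphan), and the ReplicaSet immediately creates a replacement to satisfy its replica count. A hand-run sketch, assuming a ReplicaSet selecting name=pod-adoption-release that has adopted a pod of the same name:

# Change the matched label; the pod falls out of the selector and is released.
kubectl label pod pod-adoption-release name=not-matching --overwrite

# The released pod has no ownerReferences left...
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences}'
# ...and the ReplicaSet has backfilled a new pod that does match.
kubectl get pods -l name=pod-adoption-release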
May 11 15:02:09.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 15:02:10.031: INFO: namespace replicaset-889 deletion completed in 22.482821195s • [SLOW TEST:30.722 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 11 15:02:10.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-9c7d7ef0-36f3-41af-ac97-86f40db654f8 STEP: Creating configMap with name cm-test-opt-upd-98641ed4-e327-4350-9f4b-6645f05ef073 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-9c7d7ef0-36f3-41af-ac97-86f40db654f8 STEP: Updating configmap cm-test-opt-upd-98641ed4-e327-4350-9f4b-6645f05ef073 STEP: Creating configMap with name cm-test-opt-create-13060012-9b80-43a3-8dd4-7c26a6de6ade STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 11 15:03:31.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3737" for this suite. May 11 15:03:56.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 11 15:03:56.103: INFO: namespace configmap-3737 deletion completed in 24.104553063s • [SLOW TEST:106.072 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSMay 11 15:03:56.104: INFO: Running AfterSuite actions on all nodes May 11 15:03:56.104: INFO: Running AfterSuite actions on node 1 May 11 15:03:56.104: INFO: Skipping dumping logs from cluster Ran 215 of 4412 Specs in 7683.391 seconds SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped PASS
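For reference, the optional-ConfigMap volume behavior exercised by the final spec above comes down to optional: true on the volume source: the pod starts even if the named ConfigMap does not exist yet, and the kubelet folds later creations and updates into the mounted files on its periodic sync. A minimal sketch with illustrative names:

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: cm-opt-demo
spec:
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: cm-opt-demo    # may not exist yet; the pod still starts
      optional: true
EOF

# Creating (or later updating) the ConfigMap is reflected under /etc/cfg
# after the kubelet's next sync pass, not instantaneously.
kubectl create configmap cm-opt-demo --from-literal=key=value
kubectl exec cm-opt-demo -- cat /etc/cfg/key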